
Generative AI and L2 pragmatics
Robert Godwin-Jones 1
1 : Virginia Commonwealth University

The ability of generative AI to produce language that closely resembles human speech has led to claims that AI chatbots “facilitate an authentic, interactional language learning environment” (Chiu et al., 2023), that AI use is “essential for promoting cultural sensitivity, intercultural competency, and global awareness” (Anis, 2023), and that AI-based VR supplies “the benefits of in-country immersion programs without the hassle” (Divekar, 2022). The assumption is that AI output is linguistically and culturally authentic enough to substitute for human interlocutors in language learning settings, or even to provide benefits similar to those of a study abroad experience. Such a view ignores how AI systems reproduce language and the limitations that process imposes on the linguistic features and the cultural content of the resulting output. The mathematical model of language in AI lacks the sociocultural grounding humans acquire through sensorimotor interactions and from simply living in the real world. Studies of AI's capacity for pragmatically effective language use have shown significant limitations (Lee & Wang, 2023; Su & Goslar, 2023). While AI systems can gain pragmalinguistic knowledge and learn appropriate formulaic sequences (politeness conventions, for example) from the verbal exchanges in their training data, they have proven much less effective in sociopragmatic engagement, that is, in generating contextually acceptable speech that reflects an interlocutor's state of mind (Chen et al., 2024).

Recent studies of ChatGPT's pragmatic language abilities draw on the well-known Gricean maxims for effective communication in social situations (which support Grice's “Cooperative Principle”; Grice, 1989). Giannakidou and Mari (2024) found that ChatGPT violates the Gricean maxim of Quality, which demands that speakers be truthful. Because AI systems have no real-world understanding of the texts they produce, they are prone to inaccuracies and even false information. That possibility can also lead to violations of the Gricean maxim of Relevance (Harnad, 2024). Studies have shown that ChatGPT likewise violates the maxim of Quantity (give only as much information as needed), as its output is often verbose and repetitive (Barattieri di San Pietro et al., 2023). Also suspect is the maxim of Manner (be clear and straightforward), as ChatGPT is often overly polite and formal (Chen et al., 2024; Dynel, 2023).

AI systems will inevitably improve through larger datasets and integrated multimedia. However, those measures will not substitute for the human experience of negotiating common ground, linguistically and culturally, in social interactions (Barattieri di San Pietro et al., 2023), and they will therefore not supply the ability to deal with nuanced pragmatic scenarios. Drawing on published research on the topic and the presenter's own study of ChatGPT's performance on discourse completion tasks, this presentation will examine AI's pragmatic abilities.

References

Anis, M. (2023). Leveraging Artificial Intelligence for Inclusive English Language Teaching: Strategies and Implications for Learner Diversity. Journal of Multidisciplinary Educational Research, 12(6), 54–70.

Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which humans? PsyArXiv Preprints. https://psyarxiv.com/5b26t/

Barattieri di San Pietro, C., Frau, F., Mangiaterra, V., & Bambini, V. (2023). The pragmatic profile of ChatGPT: Assessing the communicative skills of a conversational agent. Sistemi Intelligenti, 35(2), 379–399.

Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., & Hershcovich, D. (2023). Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. arXiv preprint arXiv:2303.17466.

Chen, X., Li, J., & Ye, Y. (2024). A feasibility study for the application of AI-generated conversations in pragmatic analysis. Journal of Pragmatics, 223, 14–30.

Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4, 100118.

Divekar, R. R., Drozdal, J., Chabot, S., Zhou, Y., Su, H., Chen, Y., Zhu, H., Hendler, J. A., & Braasch, J. (2022). Foreign language acquisition via artificial intelligence and extended reality: Design and evaluation. Computer Assisted Language Learning, 35(9), 2332–2360.

Dynel, M. (2023). Lessons in linguistics with ChatGPT: Metapragmatics, metacommunication, metadiscourse and metalanguage in human-AI interactions. Language & Communication, 93, 107–124.

Giannakidou, A., & Mari, A. (2024). The human and the mechanical: Logos, truthfulness, and ChatGPT. arXiv preprint arXiv:2402.01267.

Grice, P. (1989). Studies in the Way of Words. Harvard University Press.

Gubelmann, R. (2024). Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now)–A Kantian-Cum-Pragmatist Case. Philosophy & Technology, 37(1), 32.

Harnad, S. (2024). Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding. arXiv preprint arXiv:2402.02243.

Lee, S. H., & Wang, S. (2023). Do language models know how to be polite? Proceedings of the Society for Computation in Linguistics, 6(1), 375–378.

Su, D., & Goslar, K. (2023, October 19–21). Evaluating pragmatic competence of Artificial Intelligence with the Lens concept: ChatGPT-4 for Chinese L2 teaching [Conference presentation abstract]. 20th Annual Technology for Second Language Learning Conference, Ames, IA, United States.
