Plenary talks
Sal CONSOLI — Narrative Inquiry: Towards a Holistic Approach Rooted in Reflexivity and Life Capital

Narrative inquiry represents an indispensable methodological approach in the field of applied linguistics and language learning research, as it highlights a deeply human-centered perspective (Barkhuizen & Consoli, 2021). It allows for the examination of linguistic identity, motivation, and agency through temporal, spatial, and social parameters (Johnson & Golombek, 2002; Pavlenko, 2007; Benson, 2014). Reflexivity, as a methodological approach, plays a central role, ensuring authenticity, ecological validity, and ethical engagement with data, particularly participants' narratives. It enables researchers to question their own positionality and ensure the credibility of their interpretations. Adopting this approach offers researchers the opportunity to better understand the complexity of linguistic trajectories and to highlight the diversity of human experiences. In this context, this keynote presentation emphasizes the concept of life capital (Consoli, 2022) as a crucial extension of reflexivity in narrative inquiry. By integrating memories, emotions, dispositions, and socio-historical experiences, life capital illuminates individuals' language trajectories and strengthens the ecological validity of research. This approach transcends traditional sociocultural frameworks (Bourdieu, 1986) and offers a more nuanced and dynamic understanding of language learning processes. Although methodological and ethical challenges remain, particularly concerning scientific rigor and publication constraints (De Costa et al., 2021), this presentation advocates for a narrative inquiry rooted in reflexivity and life capital. It thus underscores the need for methodological innovations aimed at ensuring a deeper and more authentic understanding of the empirical realities specific to our research.
Karën FORT — Large Language Models, Situated Tools

Large Language Models (LLMs) emerged six years ago and have been used daily by the general public since the launch of ChatGPT at the end of 2022. They have completely revolutionized the field of Natural Language Processing (NLP), bringing it to the forefront and into the hands of millions of users. These are no longer research tools but products, advertised rather than published. In this presentation, I will show that while LLMs are marketed as universal, they are in fact situated tools, produced in a highly biased framework that raises several ethical issues related to their origin, particularly problems of stereotypes and conflicts of interest. I will present the work we have done, notably with Aurélie Névéol and Fanny Ducel (LISN), to assess the exact extent of these problems, and I will propose a more systemic reflection on the subject.

Thierry POIBEAU — Natural and Artificial: When Machines Write Poems...

Since the advent of computers (and even before!), extensive research has been conducted on the automatic generation of poetry in all its forms (haikus, sonnets, free verse, etc.). Why this interest? Probably for a multitude of reasons, but the fundamental point is the free, creative, and timeless essence of poetry (in contrast to machine translation or information retrieval, flagship applications of natural language processing that are governed by a strict framework). The generation of poetry has also led to numerous literary and artistic collaborations, fostering dialogue between different disciplines. In this presentation, I will revisit some recent experiments and demonstrate their practical value, including for educational purposes. Finally, on a theoretical level, I will attempt to show how these experiments raise fundamental questions about the nature of creativity (can we say that a computer, or an artificial system, is creative?).