Modelli generativi e sintassi generativa [Generative models and generative syntax]
Cristiano Chesi
Writing – Original Draft Preparation
2023-01-01
Abstract
In this short paper we present the results of four experiments assessing various degrees of morphosyntactic and semantic linguistic competence in three very large language models (LLMs), namely davinci (GPT-3/ChatGPT), davinci-002, and davinci-003 (GPT-3.5 with different training options). We focused on (i) acceptability, (ii) complexity, and (iii) coherence judgments on 7-point Likert scales, and on (iv) syntactic development by means of a forced-choice task. The datasets used are taken from available test sets presented in shared tasks by the NLP community or from linguistic tests. The results suggest that, despite rather good overall performance, these LLMs cannot be considered competence models, since they qualify neither as descriptively nor as explanatorily adequate.