Nouha Dziri


I’m a research scientist at the Allen Institute for AI, working with Yejin Choi and the Mosaic team. Prior to this, I earned my PhD in 2022 from the University of Alberta and the Alberta Machine Intelligence Institute, where I worked on reducing hallucination in conversational language models. Currently, my work revolves around three main axes:

  • Science of LMs: Understanding the limits of Transformers and their inner workings.
  • Innovation with learning: Building smaller LMs that can learn more efficiently.
  • Social impact of LMs: Better aligning LMs with human values and ethical principles.

In the past, I was fortunate to work with brilliant researchers in the field: with Siva Reddy at Mila/McGill; with Hannah Rashkin, Tal Linzen, David Reitter, Diyi Yang, and Tom Kwiatkowski at Google Research NYC; and with Alessandro Sordoni and Geoff Gordon at Microsoft Research Montreal.


Nov 2023 Invited Talk: Presented “Faith and Fate” & “Generative AI Paradox” at the LLM evaluation workshop at The Alan Turing Institute.
Nov 2023 Invited Talk: Presented “Faith and Fate” at the ILCC CDT/NLP seminar, University of Edinburgh.
Nov 2023 Invited Talk: Presented “Faith and Fate” at the SAIL workshop on fundamental limits of LLMs.
Nov 2023 New paper :mega: “The Generative AI Paradox: What It Can Create, It May Not Understand” is out. [Paper]
Oct 2023 New paper :mega: “Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement” is out. [Paper][Code]
Oct 2023 Invited Talk: Presented “Faith and Fate” at the University of Pittsburgh.
Oct 2023 2 papers accepted at EMNLP 2023.
Sep 2023 Invited Talk: Presented “Faith and Fate” at the Formal Languages and Neural Networks Seminar. [Video]
Sep 2023 3 papers accepted at NeurIPS 2023. See you in New Orleans :airplane:
Jun 2023 New paper :mega: “Fine-Grained Human Feedback Gives Better Rewards for Language Model Training” is out. [Paper] [Code/Data] (NeurIPS Spotlight 2023)
May 2023 New paper :mega: :sparkles: “Faith and Fate: Limits of Transformers in Compositionality” :sparkles: is out. [Paper][Code][Blog] (NeurIPS Spotlight 2023)
Mar 2023 New paper :mega: “Self-Refine: Iterative Refinement with Self-Feedback” :speech_balloon: is out. [Paper][Website][Demo] (NeurIPS 2023)