Nouha Dziri

I’m a research scientist at the Allen Institute for AI (Ai2), working with Yejin Choi. I currently co-lead the safety and post-training effort at Ai2 to build OLMo, a highly capable and truly open LLM to advance AI.
Besides this, I work on understanding the limits of Transformers and their inner workings. Check out “Faith and Fate” to learn about the limits of Transformers on reasoning tasks. I also study alignment in LLMs; check out “Roadmap to Pluralistic Alignment”, “Fine-Grained RLHF”, and “RewardBench”, the first evaluation tool for reward models.
My work has been featured in TechCrunch, Le Monde, The Economist, Science News, and Quanta Magazine.
I have been fortunate to work with brilliant researchers in the field: Siva Reddy at Mila/McGill; Hannah Rashkin, Tal Linzen, David Reitter, Diyi Yang, and Tom Kwiatkowski at Google Research NYC; and Alessandro Sordoni and Geoff Gordon at Microsoft Research Montreal.
News
| Date | News |
|---|---|
| Dec 2024 | The System 2 Reasoning at Scale workshop at NeurIPS 2024 was a success. |
| Dec 2024 | Invited talk “In-Context Learning in LLMs: Potential and Limits” at the Language Gamification Workshop at NeurIPS 2024. |
| Dec 2024 | Invited as a panelist at the Meta-Generation Algorithms for Large Language Models tutorial at NeurIPS 2024. |
| Jul 2024 | New red-teaming method released. |
| Jul 2024 | Check out my interview with Science News about LLMs’ reasoning skills, featuring “Faith and Fate” and “The Generative AI Paradox”. |
| Jul 2024 | I will serve as a Demo Chair for NAACL 2025. |