Nouha Dziri

I’m a research scientist at the Allen Institute for AI (Ai2), working with Yejin Choi. I currently co-lead the safety and post-training efforts at Ai2 to build OLMo, a highly capable and truly open LLM to advance AI.
Beyond this, I work on understanding the limits of Transformers and their inner workings. Check out “Faith and Fate” to learn about the limits of Transformers on reasoning tasks. I also study alignment in LLMs; check out “Roadmap to Pluralistic Alignment”, “Fine-Grained RLHF”, and “RewardBench”, the first evaluation tool for reward models.
My work has been featured in TechCrunch, Le Monde, The Economist, Science News, and Quanta Magazine.
I have been fortunate to work with brilliant researchers in the field: Siva Reddy at Mila/McGill; Hannah Rashkin, Tal Linzen, David Reitter, Diyi Yang, and Tom Kwiatkowski at Google Research NYC; and Alessandro Sordoni and Geoff Gordon at Microsoft Research Montreal.
News
Jul 2025 | Invited lecture about LLM reasoning at the Armenian LLM Summer School 2025 |
---|---|
Jul 2025 | Invited talk at the Apple Reasoning and Planning Workshop in Cupertino. |
Jul 2025 | Invited talk & panel at the Data in Generative Models Workshop at ICML 2025 |
Jul 2025 | Invited talk & panel at the Computer Use Agents Workshop at ICML 2025 |
Jul 2025 | Invited talk & panel at the Cross Future AI & Technology Summit in Vancouver |
May 2025 | Invited talk and panel at the International Symposium on Trustworthy Foundation Models (MBZUAI) |
Feb 2025 | Honored to have been part of the Paris AI Action Summit |