Advancing Language Models through Instruction Tuning:
Recent Progress and Challenges

EMNLP 2025 Tutorial
9:00 - 12:30, November 8, 2025 • Suzhou, China

Abstract

The ability to follow instructions is a key capability of AI systems. In NLP, instruction tuning -- training language models to follow natural language instructions -- has therefore become a fundamental component of the model development pipeline. This tutorial addresses three critical questions within the field: (1) What are the current focal points in instruction tuning research? (2) What are the best practices for training an instruction-following model? (3) What new challenges have emerged? To answer these questions, the tutorial presents a systematic overview of recent advances in instruction tuning. It covers the different stages of model training: supervised fine-tuning, preference optimization, and reinforcement learning. It introduces scalable strategies for building high-quality instruction data, discusses common criteria for evaluating instruction-following models, and explores how to interpret the instruction-following behavior of LLMs. The audience will gain a comprehensive understanding of cutting-edge trends in instruction tuning and insights into promising directions for future research.
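
To make the supervised fine-tuning stage mentioned above concrete, here is a minimal sketch assuming a HuggingFace-style causal language model; the "gpt2" checkpoint and the "Instruction:/Response:" prompt template are illustrative placeholders, not the tutorial's actual setup.

    # Minimal SFT sketch. Assumptions: "gpt2" checkpoint and the
    # "Instruction:/Response:" template are placeholders for illustration.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.train()

    # One instruction--response pair; SFT teaches the model to generate
    # the response conditioned on the instruction.
    instruction = "List three primary colors."
    response = "Red, yellow, and blue."
    text = f"Instruction: {instruction}\nResponse: {response}{tokenizer.eos_token}"

    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective; labels are shifted inside the model.
    # In practice, the loss is often masked so only response tokens contribute.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()  # one gradient step of SFT (optimizer step omitted)

The later stages covered in the tutorial, preference optimization and reinforcement learning, build on such an SFT model by further optimizing it against preference or reward signals.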

Schedule (tentative)

Time           Session                                                Speaker
9:00 - 9:10    Introduction                                           Meng Jiang
9:10 - 9:50    Evaluation Criteria for Instruction-Following Models   Renze Lou
9:50 - 10:30   How to Collect High-Quality Instruction Data           Zhihan Zhang
10:30 - 11:00  Coffee Break                                           -
11:00 - 11:40  How to Train LLMs to Follow Instructions               Fangkai Jiao
11:40 - 12:10  Explaining the Instruction-Following Behavior of LLMs  Wenpeng Yin
12:10 - 12:30  Future Directions and Q&A                              Meng Jiang