The ability to follow instructions is a key capability for AI systems. In NLP, instruction tuning -- the process of training language models to follow natural language instructions -- has therefore become a fundamental component of the model development pipeline. This tutorial addresses three critical questions in the field: (1) What are the current focal points of instruction tuning research? (2) What are the best practices for training an instruction-following model? (3) What new challenges have emerged? To answer these questions, the tutorial presents a systematic overview of recent advances in instruction tuning. It covers the different stages of model training: supervised fine-tuning, preference optimization, and reinforcement learning. It introduces scalable strategies for building high-quality instruction data, discusses common criteria for evaluating instruction-following models, and explores how the instruction-following behavior of LLMs should be interpreted. The audience will gain a comprehensive understanding of cutting-edge trends in instruction tuning and insights into promising directions for future research.