Oxford LLMs Materials

This collection brings together lecture slides, workshop materials, and practical guides from past iterations of the “Oxford LLMs” summer school for the social sciences. The content spans foundational concepts to advanced applications, from transformer architectures and model interpretability to fine-tuning techniques and LLM agent systems. Whether you’re exploring core NLP principles or specialized implementations such as retrieval-augmented generation, these materials provide both theoretical grounding and hands-on guidance. All resources are openly available to researchers, practitioners, and students working with LLMs in their research!

2024

Building on the 2023 program, the 2024 materials cover the fundamentals of LLMs, applied tutorials, and recent advancements. In particular, the program delves into foundational concepts in model architecture and evaluation, while subsequent lectures explore agent-based systems and advanced fine-tuning techniques. The practical components address specialized applications, including retrieval-augmented generation for social science research, implementation of observability tools, and workflow development for LLM agents. Technical deep dives cover the Gemini architecture and practical considerations for self-hosting models, providing an end-to-end perspective on contemporary LLM applications. Grigory Sapunov, Tatiana Shavrina, and Ilya Boytsov developed the lecture materials, with seminar contributions from Atita Arora, John Githuly, Christian Silva, and Ciera Fowler. See the full list of materials below:

Lectures

Seminars

2023

The 2023 program covers fundamental and applied aspects of transformer-based language models, tracing their evolution from the seminal 2017 paper “Attention Is All You Need” to contemporary systems like ChatGPT. Lectures created by Elena Voita examine critical challenges, including model bias, interpretability methods, and alignment techniques. Practical seminars designed by Ilya Boytsov provide hands-on experience with transformer models, from basic fine-tuning approaches to advanced methods such as parameter-efficient adaptation. Additional sessions explore specialized applications, including prompt engineering, classic NLP pipelines, and reinforcement learning for model detoxification. See the full list of materials below:

Lectures

The following lecture materials were created by Elena Voita.

Seminars

The following workshop materials were designed and implemented by Ilya Boytsov.

This post is licensed under CC BY 4.0 by the author.