Summer School Materials
Lecture slides, Jupyter notebooks, and workshop materials from the Oxford LLMs summer schools. All openly available.
This collection brings together lecture slides, workshop materials, and practical guides from past iterations of the Oxford LLMs summer school for social sciences. The content spans foundational concepts to advanced applications—from transformer architectures and model interpretability to fine-tuning techniques and LLM agent systems.
2024
Building on the 2023 program, the 2024 materials cover the fundamentals of LLMs, applied tutorials, and recent advances. The program spans model architecture and evaluation, agent-based systems, advanced fine-tuning techniques, and practical implementations such as retrieval‑augmented generation and LLM observability.
Lectures
- Introduction to LLMs — Grigory Sapunov
- LLM Evaluation — Tatiana Shavrina
- LLM Agents — Tatiana Shavrina
- LLM Agents (continued) — Grigory Sapunov
- Fine-Tuning and Alignment — Ilya Boytsov
Seminars
- Navigating RAG for Social Science — Atita Arora
- LLM Observability and Evaluation — John Githuly
- Tutorial: Creating Llama‑Researcher using LlamaIndex workflows — John Githuly
- Gemini Model Deep Dive — Christian Silva
- ORI's Guide to Self Hosting LLMs — Ciera Fowler
2023
The 2023 program covers fundamental and applied aspects of transformer‑based language models, from the original “Attention is All You Need” paper to contemporary systems like ChatGPT. Lectures by Elena Voita focus on bias, interpretability, and alignment, while hands‑on seminars by Ilya Boytsov introduce fine‑tuning, parameter‑efficient methods, and classic NLP workflows.
Lectures
All lecture materials for this year were created by Elena Voita.
Seminars
The workshop materials below were designed and implemented by Ilya Boytsov.
- Google Colab environment setup, general intro
- Introduction to Hugging Face transformers
- Topic modelling with transformers using BERTopic
- Fine-tuning a pretrained model for classification
- Parameter Efficient Fine Tuning (PEFT)
- Transformers interpretability, attention visualisation, and saliency methods
- Model analysis with classic NLP using spaCy
- Prompts and instructions with Llama 2
- Detoxifying summarisation models with RLHF
GitHub Archive
All materials are also collected in the oxford-llms-workshop GitHub repository.