Oxford LLMs Materials
2024
Lectures
Introduction to LLMs by Grigory Sapunov.
LLM Evaluation by Tatiana Shavrina.
LLM Agents by Tatiana Shavrina.
LLM Agents (continued) by Grigory Sapunov.
Fine-Tuning and Alignment by Ilya Boytsov.
Seminars
Navigating RAG for Social Science by Atita Arora.
LLM Observability and Evaluation by John Githuly.
Tutorial: Creating LLama-Researcher using LlamaIndex workflows by John Githuly.
Gemini Model Deep Dive by Christian Silva.
ORI’s Guide to Self Hosting LLMs by Ciera Fowler.
2023
Explore our 2023 lecture and workshop series, featuring materials created by Elena Voita and Ilya Boytsov. Start with the evolutionary journey of NLP, understand bias and interpretability in LLMs, and learn about alignment from the lecture slides and our upcoming(!) lecture videos. Our workshops cover essential topics such as setting up Google Colab, using Hugging Face transformers, topic modelling with BERTopic, fine-tuning pretrained models, parameter-efficient fine-tuning, transformer interpretability, classic NLP with spaCy, prompts and instructions with Llama 2, and detoxifying summarisation models with reinforcement learning. Stay tuned for the release of the video recordings!
Lectures
The following lecture materials were created by Elena Voita.
- The Evolutionary Journey of NLP from rule-based systems to modern Transformers-based models, which are the core technology underpinning LLMs. Video coming soon!
- Bias in LLMs and a (bit of) Interpretability. Video coming soon!
- LLMs and Alignment. Video coming soon!
Workshops
The following workshop materials were designed and implemented by Ilya Boytsov. We will upload the workshop recordings soon!
- Google Colab environment setup and general introduction
- Introduction to the Hugging Face transformers library
- Topic modelling with Transformers using the BERTopic library
- A guide on how to fine-tune a pretrained model for a classification task
- Parameter Efficient Fine Tuning (PEFT)
- Transformer interpretability: attention visualisation and saliency methods (e.g. Integrated Gradients)
- Model analysis with classic NLP using spaCy
- Prompts and instructions with Llama 2
- Detoxifying a summarisation model with Reinforcement Learning from Human Feedback (RLHF)
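As a small taste of the prompting material covered in the Llama 2 workshop, here is a minimal sketch of the chat prompt template that Llama 2's chat-tuned models expect, with the system message wrapped in <<SYS>> tags inside the first [INST] block. The helper function name is ours, for illustration only; the workshop notebooks are the authoritative reference.

```python
def build_llama2_prompt(system_message: str, user_message: str) -> str:
    """Assemble a single-turn chat prompt in the Llama 2 format.

    Note: this is an illustrative helper, not part of any library API.
    """
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_message}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )


# Example: a research-assistant style prompt.
prompt = build_llama2_prompt(
    "You are a helpful research assistant.",
    "Summarise the main critiques of topic modelling.",
)
print(prompt)
```

The string returned by the helper is what you would pass to the model's tokenizer; the model's reply is generated after the closing [/INST] tag.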