DAISI AI Alignment 201 Course
A 5-week course facilitated by a DAISI member, introducing you to advanced topics in AI alignment research.
This is a follow-up course to our AI Alignment 101 course. It is based on the AI Safety Fundamentals 201 course compiled by Richard Ngo, a researcher at OpenAI. We have adapted the original 7-week format to a 5-week format that is more compatible with TU Delft's quarter system.
Each week consists of core readings and further readings. Core readings are the minimum that should be read before your group session, to ensure that everyone is on the same page. Further readings delve deeper into the week's topics; they are encouraged but not mandatory.
Compared to the Alignment 101 course, this course goes deeper into the most important topics in AI safety: there is a greater focus on reading papers and engaging with complex arguments. You will also have more freedom to read about the topics you're most interested in. For example, during weeks 2 and 3, each group member can choose from several topics and will then give a short presentation on their topic to the rest of the group.