For the last time this academic year, the Delft AI Safety Initiative is running the AI Safety Fundamentals course. This course is designed to give you a good overview of recent progress in AI and the challenges in developing it in a safe and aligned way. Can we understand what is going on inside large language models? How can we make sure they are aligned with human values? What risks are posed by other AI technologies? You can learn about all of this and more during the course!
Together we will go through our Alignment 101 curriculum - you can already take a look at it on our website! This curriculum is inspired by one created by Richard Ngo, a researcher at OpenAI, but has been adapted by us to a five-week format that better suits the quarterly schedule of TU Delft. Each week consists of two hours of self-study and a 1.5-hour discussion with your group (4-7 people). The weekly topics are the following:
Week 1: The Present and Future of AI
Week 2: AGI Risks
Week 3: Goal Misgeneralisation and Learning from Humans
Week 4: Scalable Oversight and Model Evaluations
Week 5: Interpretability and Governance
After completing this program, you will have a solid understanding of the field and be able to apply it to help solve one of the world's most pressing problems! You will also receive a certificate, and you can dive deeper by joining our Alignment 201 course.
No programming knowledge is required. If you're unfamiliar with machine learning, the first week includes optional content that gives you a quick overview of the most important concepts the rest of the course builds upon.
Don’t wait! The course starts on the 13th of May, and the deadline to sign up is the 10th of May. Also check out the full curriculum here.