Week 5 - Open Problems and Careers in AI Safety
You will finish the course with a week that prepares you for your next step in the field of AI safety: it gives an overview of open problems, opportunities and career paths within the field. Please note that working through this week's core readings in full would take considerably longer than two hours. You are not expected to do so: the readings have been deliberately chosen so that many sections can be skipped while you still get full value from the rest. You are encouraged to read the sections that seem most relevant to you (for example, the open problems in a subfield that caught your attention during the course), or the sections you expect to broaden your perspective on work in the field of AI safety.
Core readings:
This reading gives a comprehensive overview of unsolved safety problems for LLMs.
This reading gives a significantly broader overview of unsolved problems in AI safety, treating those problems from a model-agnostic viewpoint.
(My understanding of) What Everyone in Technical Alignment is Doing and Why (Larsen, 2022)
This blog post provides a comprehensive overview of organisations and research agendas in the field of AI safety as of 2022.
This article is long, but it is full of action-guiding advice that can help you narrow down which skills to build and what sort of long-term path in technical alignment to pursue.
Ngo compiles a number of resources for thinking about careers in alignment research. Use this resource to get a sense of the career types that exist in technical alignment research, and to consider which paths suit and excite you.
This article provides an in-depth review of the AI safety technical researcher career path. It is authored by 80,000 Hours, a non-profit organisation that provides research and guidance to help individuals make high-impact career choices. The article discusses what the career path is like and what difficulties it involves, and also offers practical advice on topics such as how to upskill and whether to do a PhD to enter the field.
Further readings:
200 Concrete Open Problems in Mechanistic Interpretability (Nanda, 2022) (skip to the last section and follow the links that seem most interesting to you)
Levelling Up in AI Safety Research Engineering (Mukobi, 2022)
A helpful guide laying out suggested steps for building the skills needed for an eventual role as a machine learning research engineer. These skills are highly applicable to many roles at alignment organisations.
Resources that (I think) new alignment researchers should know about (Wasil, 2023)
Podcast spotlight:
For a discussion of careers in the field of AI safety and ways of entering the field, listen to the 80,000 Hours podcast episode with Daniel Ziegler and Catherine Olsson. Ziegler is a technical researcher at Redwood Research, a non-profit alignment research organisation, and Olsson is a research engineer on Anthropic's mechanistic interpretability team. You can also listen to the 80,000 Hours podcast episode with Jan Leike on how to become an AI alignment researcher.