AI Safety Fundamentals
May 13 to Jun 14

For the last time this academic year, the Delft AI Safety Initiative is running the AI Safety Fundamentals course. This course is designed to give you a good overview of recent progress in AI and the challenges in developing it in a safe and aligned way. Can we understand what is going on inside large language models? How can we make sure they are aligned with human values? What risks are posed by other AI technologies? You can learn about all of this and more during the course!

Together we will go through our Alignment 101 curriculum - you can already take a look at it on our website! This curriculum is inspired by one created by Richard Ngo, a researcher at OpenAI, but has been adapted by us to a five-week format that better suits the quarterly schedule of TU Delft. Each week consists of 2 hours of self-study and a 1.5-hour discussion with your group (4-7 people). The weekly topics are the following:

Week 1: The Present and Future of AI
Week 2: AGI Risks
Week 3: Goal Misgeneralisation and Learning from Humans
Week 4: Scalable Oversight and Model Evaluations
Week 5: Interpretability and Governance

After completing this program you will have a deep understanding of the area and will be able to apply it to help solve one of the world’s most pressing problems! You will also receive a certificate, and you can dive deeper into the field by joining the Alignment 201 course, which we also offer.

No programming knowledge is required. In case you’re unfamiliar with machine learning, the first week will include optional content that gives you a quick overview of the most important concepts that the rest of the course builds upon.

Don’t wait! The course starts on the 13th of May, and the deadline to sign up is the 10th of May. Also check out the full curriculum here.

AGI Safety Fundamentals
Feb 28 to Mar 27

This program dives deep into the risks posed by advanced Artificial Intelligence. We talk about current progress in AI, the problems that need to be solved to make sure AI systems are safe, and how to align AI with human values!

We cover questions such as: How can we teach AI to behave ethically? How do we make sure AI follows the intent of its creators? How can you test whether an AI is safe to deploy? What is the state of the art in AI, and how will it progress in the coming years?

Together we will go through the curriculum created by AI Alignment researcher Richard Ngo: https://www.agisafetyfundamentals.com/ai-alignment-curriculum. Each of the first seven weeks consists of 1.5 hours of reading about the problem and 1.5 hours of discussing the contents with other interested students. In the remaining four weeks you get to pick your own mini-project to develop your skills and knowledge in the field.

Week 1: Artificial General Intelligence

Week 2: Reward misspecification and foundation models

Week 3: Goal misgeneralization and instrumental convergence

Week 4: Inverse Reinforcement Learning and Iterated Amplification

Week 5: Debate and unrestricted adversarial training

Week 6: Interpretability

Week 7: Agent foundations, AI governance, and careers in alignment

Weeks 8-11: Your Project

After completing this program you will have a deep understanding of the area and will be able to apply it to help solve one of the world’s most pressing problems!

No programming knowledge is required. However, if you are less familiar with concepts in Machine Learning, you can prepare with Week 0: Introduction to Machine Learning.

Don’t wait! The programme starts on the 28th of February, and the deadline to sign up is the 18th of February. Also check out the full curriculum here.

DAISI Intro event
Feb 15

Will AI really cause a catastrophe? Hopefully not! AI has tremendous potential for making the world a better place, especially as the technology continues to develop.

Still - we need to take the risks seriously.

In this event, we will introduce the essential arguments for how AI could cause immense harm, and then show how you can get involved to make sure we get AI right. We will give you an overview of the emerging problem of aligning AI systems with human values and of approaches to solving it.

When: Thursday, February 15, 18:00 - 19:30

Location: Pulse Hall 7

Sign up now!


AI Safety Hackathon - Entrepreneur First x Apart Research x TU Delft x DAISI
Nov 11 to Nov 12

Note that the deadline to sign up is October the 29th!

Will you be one of the 40 exceptional AI/ML engineers, researchers, and students passionate about this field taking part in our next AI Safety Hackathon in the Netherlands?

Join a curated group of ambitious and talented people, from students to AI experts, with a wide range of skills, backgrounds, and a deep interest in working on AI safety topics within large language models (LLMs), AI cyber defense, interpretability, and AI evaluation.

With over a decade of championing AI founders, we are happy to welcome the next generation of AI founders to join the ranks of Tractable, Sonantic, Magic Pony, and many more in our portfolio.

You’ll have two days to build a business proposal and an MVP before pitching to an exceptional jury:

- Noah Siegel - DeepMind’s AI safety research team

- Daan Jujin - AI policy expert at the Dutch Ministry of Economic Affairs

- Louis Fleury - Principal & Talent Investor at Entrepreneur First

- Esben Kran - Executive Director of Apart Research

The jury will be looking primarily at the quality of the MVP and its impact. The winning team(s) will be eligible for a 3-6 month mentorship at Apart Lab. During this period, they will offer you direct mentorship in AI safety research, guidance in academic paper writing, computational resources, and much more! Participation is free, with food and drinks provided.

Are you an outstanding ML/AI engineer, a PhD candidate, a student, or a recent graduate with a strong interest in building AI solutions? This hackathon is for you.

Find out more and sign up now here!


AGI Safety Fundamentals
Oct 4 to Dec 18

This program dives deep into the risks posed by advanced Artificial Intelligence. We talk about current progress in AI, the problems that need to be solved to make sure AI systems are safe, and how to align AI with human values!

We cover questions such as: How can we teach AI to behave ethically? How do we make sure AI follows the intent of its creators? How can you test whether an AI is safe to deploy? What is the state of the art in AI, and how will it progress in the coming years?

Together we will go through the curriculum created by AI Alignment researcher Richard Ngo: https://www.agisafetyfundamentals.com/ai-alignment-curriculum. Each of the first seven weeks consists of 1.5 hours of reading about the problem and 1.5 hours of discussing the contents with other interested students. In the remaining four weeks you get to pick your own mini-project to develop your skills and knowledge in the field.

Week 1: Artificial General Intelligence

Week 2: Reward misspecification and foundation models

Week 3: Goal misgeneralization and instrumental convergence

Week 4: Inverse Reinforcement Learning and Iterated Amplification

Week 5: Debate and unrestricted adversarial training

Week 6: Interpretability

Week 7: Agent foundations, AI governance, and careers in alignment

Weeks 8-11: Your Project

After completing this program you will have a deep understanding of the area and will be able to apply it to help solve one of the world’s most pressing problems!

No programming knowledge is required. However, if you are less familiar with concepts in Machine Learning, you can prepare with Week 0: Introduction to Machine Learning.

Don’t wait! The programme starts on the 9th of October, and the deadline to sign up is the 4th of October.

DAISI social!
Oct 3

Join the Delft AI Safety Initiative on Tuesday, the 3rd of October, for drinks at Café Het Klooster. This is a great chance to get to know the community, engage with others who are passionate about making AI systems safe, and discuss the future of AI. We’ll start at 20:00 and the first drink is on us!

We're looking forward to a fun and interesting evening!

Sign up here!

OpenAI Talk + Q&A
Sep 27

With AI capabilities soaring, we're at a critical juncture to discuss its ethical and existential dimensions. This is your shot to ask pressing questions directly to 3 OpenAI researchers who are at the forefront of this game-changing technology.

Are you curious about the astonishing potential of AI and the profound implications it holds for the future of humanity? Join us for a thought-provoking deep dive and exclusive Q&A with 3 OpenAI researchers on the topic of AI and existential risk!

With AI advancing faster than ever, how long do we have before it definitively surpasses our cognitive abilities? And how do we stay in control of systems smarter than ourselves? Experts are increasingly concerned that civilizational collapse and even extinction are not fringe possibilities. How do we steer away from disaster and safeguard humanity’s future?

Sign up and gain unique insights into navigating the rapidly evolving landscape of AI, and discover how you can actively shape its trajectory. Get ready to ask your questions to the very people building the future!

Program
- 19:00 - 19:15 Doors open
- 19:15 - 19:30 Introduction to AI safety
- 19:30 - 20:00 Live talk from OpenAI researchers
- 20:00 - 21:00 Live audience Q&A with all OpenAI guests
- 21:00 - 21:15 Closing talk: What can we do?

Sign up and secure your spot here!


AGI Safety Fundamentals
May 8

This program dives deep into the risks posed by advanced Artificial Intelligence. We talk about current progress in AI, the problems that need to be solved to make sure AI systems are safe, and how to align AI with human values!

We cover questions such as: How can we teach AI to behave ethically? How do we make sure AI follows the intent of its creators? How can you test whether an AI is safe to deploy? What is the state of the art in AI, and how will it progress in the coming years?

Together we will go through the curriculum created by AI Alignment researcher Richard Ngo: https://www.agisafetyfundamentals.com/ai-alignment-curriculum. Each of the first seven weeks consists of 1.5 hours of reading about the problem and 1.5 hours of discussing the contents with other interested students. In the remaining four weeks you get to pick your own mini-project to develop your skills and knowledge in the field.

Week 1: Artificial General Intelligence

Week 2: Reward misspecification and foundation models

Week 3: Goal misgeneralization and instrumental convergence

Week 4: Inverse Reinforcement Learning and Iterated Amplification

Week 5: Debate and unrestricted adversarial training

Week 6: Interpretability

Week 7: Agent foundations, AI governance, and careers in alignment

Weeks 8-11: Your Project

After completing this program you will have a deep understanding of the area and will be able to apply it to help solve one of the world’s most pressing problems!

No programming knowledge is required. However, if you are less familiar with concepts in Machine Learning, you can prepare with Week 0: Introduction to Machine Learning.

Applications are now open!

Introduction to AI Safety Event
May 1

AI systems are quickly becoming more capable and have an increasingly large impact on society. This poses pressing questions: How do we understand what is going on inside these neural networks? How can we make sure they are safe? How can we align the goals of AI with our human values? We will talk about this and more at our Introduction to AI Safety event.

The event will give you an overview of the emerging problem of aligning AI systems with human values and approaches to solve it. We're excited to see you next Monday 1st of May at 19:00 in Pulse Hall 5 (free drinks & snacks included)!

You can sign up here

AI Governance Challenge
Mar 24 to Mar 26

The rapid progress in AI presents us with critical strategic and governance challenges around the most severe risks of this technology! How can societies handle the labour displacement caused by AI? How can we avoid an arms race between countries? How can we forecast major milestones in the development of AI?

You will be presented with scenarios and cases and dive into their strategic and technical aspects. This could be anything from developing ethical standards and strategic frameworks, making forecasts about the future of AI or devising automatic test suites.

📣 The topic will be introduced with a talk from a domain expert and the Alignment Jam team
🔬 You can join even without experience as we'll share some amazing starter resources
🍕 Free food is available during the whole hackathon!!!
🏆 There is a prize pool of $2000 up for grabs to the best projects along with a random participation award of $200!
🎯 Everyone will judge each other’s projects internationally, along with a judging panel

📅 We run from Friday the 24th of March at 18:00 until Sunday the 26th of March at 18:00

Find out more and sign up here.

AGI Safety Fundamentals
Feb 27

This program dives deep into the risks posed by advanced Artificial Intelligence. We talk about current progress in AI, the problems that need to be solved to make sure AI systems are safe, and how to align AI with human values!

We cover questions such as: How can we teach AI to behave ethically? How do we make sure AI follows the intent of its creators? How can you test whether an AI is safe to deploy? What is the state of the art in AI, and how will it progress in the coming years?

Together we will go through the curriculum created by AI Alignment researcher Richard Ngo: https://www.agisafetyfundamentals.com/ai-alignment-curriculum. Each of the first seven weeks consists of 1.5 hours of reading about the problem and 1.5 hours of discussing the contents with other interested students. In the remaining four weeks you get to pick your own mini-project to develop your skills and knowledge in the field.

Week 1: Artificial General Intelligence

Week 2: Reward misspecification and foundation models

Week 3: Goal misgeneralization and instrumental convergence

Week 4: Inverse Reinforcement Learning and Iterated Amplification

Week 5: Debate and unrestricted adversarial training

Week 6: Interpretability

Week 7: Agent foundations, AI governance, and careers in alignment

Weeks 8-11: Your Project

After completing this program you will have a deep understanding of the area and will be able to apply it to help solve one of the world’s most pressing problems!

No programming knowledge is required. However, if you are less familiar with concepts in Machine Learning, you can prepare with Week 0: Introduction to Machine Learning.

You can apply here until 23/02

DAISI Introduction event
Feb 20

What risks does powerful AI technology pose and how can we mitigate them?

In the founding event of the Delft AI Safety Initiative, we will introduce you to the emerging field of AI Alignment. You will hear a talk about the problem of aligning AI with human values and how it can be approached! Afterwards, you will have time to discuss and meet with other interested students.

Free snacks & drinks provided! You can sign up here.

Coffee Chats
Feb 20 to Mar 3

Would you like to discuss some questions regarding AI safety that are too long for a simple contact form? Sign up for a quick chat over coffee (or over anything else) with a member of the DAISI team.
