
A Tutorial on Causal Representation Learning | Jason Hartford & Dhanya Sridhar

Valence Labs · 2,872 views · 2 years ago

Join the AI for drug discovery community: https://portal.valencelabs.com/

Tutorial Overview:
Causal Representation Learning (CRL) is an emerging area of research that seeks to address an important gap in the field of causality: how can we learn causal models and mechanisms without direct measurements of all the variables? To this end, CRL combines recent advances in machine learning with new assumptions that guarantee that causal variables can be identified, up to some indeterminacies, from low-level observations such as text, images, or biological measurements. In this tutorial, we will review the broad classes of assumptions driving CRL. We strive to build strong intuitions about the core technical problems underpinning CRL and draw connections across different results. We will conclude the tutorial by discussing open questions for CRL, motivated by the kind of methods we would need if we wanted to extend causal models to scientific discovery.

Connect with the speakers:
Jason Hartford - https://portal.valencelabs.com/member/ldBFuy9cJe
Dhanya Sridhar - https://portal.valencelabs.com/member/xtFl2ffqyU

Timestamps:
00:00 - Intro
01:22 - How we got here
10:23 - What would it take to build an AI bench scientist
12:56 - The setup
25:20 - The challenge of nonlinearity
30:55 - No causal representations without assumptions
32:58 - Time contrastive learning
47:21 - Switchover: Dhanya Sridhar
51:50 - What other learning signals can we use?
55:30 - Tree-based regularization
1:02:04 - Sparse mechanisms
1:09:51 - Multiple views and sparsity
1:18:51 - Concluding questions
