In January, I took up a Sponsored Associate (read: internship) position at a research lab under the supervision of Prof Ferdinando Fioretto. The task is to look into causal notions of fairness and to ask whether current methods of causal discovery are subject to structural biases. In other words, much like existing problems in ML, does the data play a large part in what we learn about causality? One would think that, since a causal discovery method is meant to discover causes, it should be robust to simple manipulations of the dataset. The truth is more complex. To examine this question properly, we first need to understand the structure of current discovery methods, so let's start with a brief review.
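To make the worry concrete, here is a small self-contained sketch (plain NumPy; the variables and the selection rule are my own illustrative choices, not anything from the lab's work): a single selection step applied to the data is enough to change the correlation structure that discovery methods read their conclusions from.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True structure: X and Y are independent, and both cause S (X -> S <- Y).
x = rng.normal(size=n)
y = rng.normal(size=n)
s = x + y + 0.5 * rng.normal(size=n)

# On the full dataset, X and Y are (correctly) uncorrelated.
print("corr(X, Y), full data:      ", round(np.corrcoef(x, y)[0, 1], 3))

# A "simple manipulation": keep only the records where S is large.
keep = s > 1.0
print("corr(X, Y), after selection:", round(np.corrcoef(x[keep], y[keep])[0, 1], 3))
# The selected sample shows a strong spurious correlation between X and Y,
# so any method that reads structure off these statistics will be misled.
```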
Can we learn cause from correlation?
Coming soon: a discussion of Reichenbach's common cause principle.
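In the meantime, here is a toy simulation of the principle's core claim (the names and coefficients are illustrative choices of mine): if X and Y are statistically dependent, then either X causes Y, Y causes X, or a common cause Z exists, and conditioning on Z screens off the dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.normal(size=n)             # the common cause
x = 2.0 * z + rng.normal(size=n)   # Z -> X
y = -1.5 * z + rng.normal(size=n)  # Z -> Y (no direct edge between X and Y)

print("corr(X, Y):        ", round(np.corrcoef(x, y)[0, 1], 3))

# Conditioning on Z (here via residuals, using the known coefficients of the
# simulation) screens off the dependence, exactly as the principle predicts.
rx = x - 2.0 * z
ry = y + 1.5 * z
print("corr(X, Y) given Z:", round(np.corrcoef(rx, ry)[0, 1], 3))
```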
How does this relate to fairness?
Coming soon: a review of causality-based fairness in ML.
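Until that review is written up, here is a toy sketch of one causal notion of fairness, counterfactual fairness, under which a predictor is unfair if flipping the protected attribute in the data-generating model would change its prediction for the same individual. The model, names, and numbers below are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

a = rng.integers(0, 2, size=n)   # protected attribute A
u = rng.normal(size=n)           # latent background factors
noise = 0.2 * rng.normal(size=n)
x = u + 0.8 * a + noise          # observed feature X, a causal descendant of A

def predict(feature):
    """A naive predictor that never sees A directly, yet is still unfair."""
    return (feature > 0.5).astype(int)

# Counterfactual world: same individuals (same u and noise), but A is flipped.
x_cf = u + 0.8 * (1 - a) + noise
flip_rate = np.mean(predict(x) != predict(x_cf))
print(f"Predictions change for {flip_rate:.1%} of individuals when A is flipped.")
```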
Some thoughts and research ideas
Is RL unfair? Does it learn from protected variables? My intuition is that it does. Causally fair RL: if we protect certain variables in the environment, can the RL agent learn fair policies by applying a causal model? A rough sketch of this idea follows.
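The sketch below assumes a gym-style environment interface; the wrapper, its methods, and the feature indices are hypothetical, not an existing API. The first step is simply masking the protected variables out of the agent's observation; proxies of those variables are exactly where the causal model would have to do the real work.

```python
import numpy as np

class ProtectedObservationWrapper:
    """Wraps an environment and removes protected features from observations.

    Note: masking alone does not guarantee causal fairness, since unprotected
    features may still be causal descendants (proxies) of the protected ones;
    a causal model of the environment is needed to handle those paths.
    """

    def __init__(self, env, protected_indices):
        self.env = env
        self.protected_indices = set(protected_indices)

    def _mask(self, obs):
        keep = [i for i in range(len(obs)) if i not in self.protected_indices]
        return np.asarray(obs)[keep]

    def reset(self):
        return self._mask(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._mask(obs), reward, done, info
```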
My presentation to Shocklab
Coming soon: a video recording of my Shocklab seminar.