I always learn a ton about people’s interests by hearing about the projects they’re excited about. If you’re working on similar things and want to talk about them or learn together, I’m always excited to chat (especially if you’re doing the work for social good)!
Here are some current work and personal projects I’m particularly excited about:
Personal Projects
- Upskilling on Variational Inference: with a particular focus on techniques like normalizing flows, which improve our ability to correctly quantify uncertainty.¹
- Studying Declining US Social Capital: Building a better knowledge base on declining social capital in the US and its implications, especially for politics.²
- Learning more about Effective Altruism: especially longtermism, and evaluating how interested I am in becoming more involved in the community.³
Work Projects
- Better Polling Methods: Refining our political survey methodology choices to improve the quality of our resulting models for the 2022 midterm elections.⁴
- Productizing HTE Estimation: Exploring which Heterogeneous Treatment Effect models perform best in a variety of contexts in our industry, with an eye towards a fully productized solution in early 2023.⁵
- Better CIs when Identification is Weak: Improving our uncertainty quantification when using observational methods with weaker identification strategies.⁶
I also want to start building out a broader list of topics that I’ve spent time with or want to dive deeper into. For example, last winter I spent a ton of time exploring Potential Outcomes vs. DAG approaches to causal inference, and in grad school I was particularly focused on the effects of educational polarization in the US. As a future example, I want to do a broader dive into how social media affects culture and politics. This is both a personal thing (to track what I’ve been interested in over the years) and a social one (I love sharing resources and working with others to learn these topics).
Footnotes
1. I started this out by working through Depth First Learning’s Variational Inference with Normalizing Flows curriculum. Next, I plan to implement a couple of MRP models with various flavors of VI and see how they hold up against an MCMC version.↩︎
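To make the flows piece concrete, here’s a minimal sketch of a single planar flow layer (the parameterization from the Rezende & Mohamed paper that curriculum is built around); the variable names and toy numbers are my own illustration, not code from the curriculum:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow layer: f(z) = z + u * tanh(w.z + b).

    z is an (n, d) batch of samples; u, w are (d,) parameters; b is a scalar.
    Returns the transformed samples and log|det Jacobian|, which is what
    lets the flow track how the density changes under the transformation.
    (Invertibility requires w.u >= -1.)
    """
    affine = z @ w + b                            # (n,)
    f_z = z + np.outer(np.tanh(affine), u)        # (n, d)
    psi = np.outer(1 - np.tanh(affine) ** 2, w)   # h'(w.z + b) * w
    log_det = np.log(np.abs(1 + psi @ u))         # (n,)
    return f_z, log_det

# Push a 2D standard-normal base through one layer and track the density:
rng = np.random.default_rng(0)
z0 = rng.standard_normal((1000, 2))
u, w, b = np.array([1.0, 0.5]), np.array([0.8, -0.3]), 0.1
z1, log_det = planar_flow(z0, u, w, b)
log_q1 = -0.5 * (z0 ** 2).sum(axis=1) - np.log(2 * np.pi) - log_det
```

Stacking several such layers (and optimizing u, w, b against the ELBO) is what gives VI a richer posterior family than the usual mean-field Gaussian.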
2. For this, I’m reading some of Putnam’s work since Bowling Alone, and a bunch of papers in the vein of this Nature paper.↩︎
3. Having read WWOTF, I’m now reading The Precipice and plan to read Doing Good Better. In addition, I’m reading a bunch of the EA Forum. Since I already think about maximizing my social impact, there are a lot of appealing ideas here. I have two cruxes: first, I’m not sure I trust EA’s commitment to being non-political, as laid out here and here. Second, I’m not yet sure I buy the math behind the heavy focus on existential risk causes.↩︎
4. None of these links will get at any IP, but broadly I’m synthesizing a lot of what I learned from AAPOR 2022 (this thread is a good starting point) and examining how some decent ideas would’ve changed our recent predictions.↩︎
5. There are a ton of ideas to explore here, but some of the most promising are double/debiased ML estimators, meta-learners, and other ideas explored in recent ACIC competitions.↩︎
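For flavor, here’s a hedged sketch of the simplest meta-learner, a T-learner, using scikit-learn; the toy data and model choice are purely illustrative, not our production setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, treated, y):
    """T-learner: fit separate outcome models on treated and control
    units, then estimate the CATE as the difference of predictions."""
    mu1 = GradientBoostingRegressor().fit(X[treated == 1], y[treated == 1])
    mu0 = GradientBoostingRegressor().fit(X[treated == 0], y[treated == 0])
    return mu1.predict(X) - mu0.predict(X)

# Toy data with a known heterogeneous effect, tau(x) = 2 * x0:
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3))
t = rng.integers(0, 2, size=2000)
y = X[:, 0] + t * (2 * X[:, 0]) + 0.5 * rng.standard_normal(2000)
cate_hat = t_learner_cate(X, t, y)  # should roughly recover 2 * X[:, 0]
```

Other meta-learners (S-, X-, R-learners) differ mainly in how they share information between the treated and control models, which matters a lot when treatment is rare.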
6. If we’re trying to estimate a causal effect without an experiment or a believable instrument, and have to rely on covariate-adjustment-style strategies, how can we quantify our uncertainty? Can we reason about plausible sizes of unobserved confounders, and in doing so move the discussion past “you needed to control for U”? If we can make such claims, can we then use them to define uncertainty intervals that reflect our varying levels of plausibility for such concerns?↩︎
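A toy version of what I have in mind (my own sketch, not a settled method): posit a bound on how much an unobserved confounder could plausibly bias the point estimate, and report the union of the correspondingly shifted confidence intervals:

```python
def sensitivity_interval(beta_hat, se, max_bias, z=1.96):
    """Widen a conventional 95% CI to reflect an assumed bound on
    confounding bias: the union of the CIs for every true effect
    consistent with |bias| <= max_bias."""
    return (beta_hat - z * se - max_bias, beta_hat + z * se + max_bias)

# e.g., an observational estimate of 0.40 (se 0.10), where we judge an
# unobserved confounder could shift the estimate by up to 0.25:
lo, hi = sensitivity_interval(0.40, 0.10, max_bias=0.25)
# Conventional CI: (0.204, 0.596); bias-aware interval: (-0.046, 0.846)
```

Reporting such intervals at a few values of max_bias (“robust if confounding is no worse than X”) feels like one way to move the conversation past a binary controlled/not-controlled framing.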