The 80,000 Hours Podcast (run by the nonprofit organization 80,000 Hours, which offers career advice to young people guided by the principles of effective altruism) is generally an incredibly good resource for learning about some of the most important yet underappreciated issues confronting humanity today. Its host, Robert Wiblin (Director of Research at 80,000 Hours), conducts the interviews extremely competently, having typically done serious homework on the topics that come up.
Still, the amazing recent episode with computer scientist Paul Christiano, at 3 hours and 52 minutes, stands out. Despite its length, I found every minute of it highly worthwhile and informative, and it convinced me that Christiano is doing some of today's most important, creative, and cutting-edge work on AI alignment and AI safety. Highly recommended! One of my top near-term priorities is to become more closely acquainted with his work, e.g., on prosaic AI alignment, on iterated distillation and amplification, on "AI safety via debate", and on the possibility of unaligned yet morally valuable AI. And as a bonus, and mostly just for pleasure, I am about to read his Eight unusual science fiction plots.