Hi there, I'm Jacques - aka JayThibs 👋
- Here's my resume and my LinkedIn profile. For my AI alignment takes, here's my LessWrong profile.
- Collaborating on the Supervising AIs Improving AIs agenda (making automated AI science safe and controllable). The current project involves a new method for unsupervised model behaviour evaluations. See our agenda.
- Accelerating Alignment: augmenting alignment researchers with AI systems. A relevant talk I gave. A relevant survey post. If you think AI safety is important and you have a strong application-development background, please reach out to me to collaborate!
- Quantum Computing, Photonics, and Energy Bottlenecks for AGI
- AI Insights #1: How Misalignment Could Lead to Takeover & Necessary Safety Properties
- What I've Been Doing Lately (early 2024) and Call for Collaborators
➡️ more blog posts...