TL;DR: I am a Ph.D. student studying interpretability in deep learning, and I am looking for industry research positions.
Take a look at my github.io site for my past publications and blog.
- Pronouns: he/him
- Twitter: @ihowrue
- Currently learning about: generative models and what makes an interpretability method useful
I recently pivoted my research towards interpretability in deep learning.
In the past, I worked on visualizing backtracking search for solving constraint satisfaction problems. For my undergraduate thesis, I developed an educational and research tool that visualizes how Sudoku can be solved within the Constraint Satisfaction Problem (CSP) framework, and why that framework is beneficial. The tool is available at https://sudoku.unl.edu
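For a rough idea of what the tool illustrates, here is a minimal plain-Python sketch (my assumption for illustration only, not the actual sudoku.unl.edu implementation) of treating Sudoku as a CSP and solving it with backtracking search:

```python
def find_empty(grid):
    """Return (row, col) of the first unassigned cell (0), or None if the grid is full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def consistent(grid, r, c, v):
    """Check the row, column, and 3x3 box constraints for placing value v at (r, c)."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def backtrack(grid):
    """Depth-first backtracking search: assign values to empty cells, undo on failure."""
    cell = find_empty(grid)
    if cell is None:
        return True  # every variable assigned -> solved
    r, c = cell
    for v in range(1, 10):
        if consistent(grid, r, c, v):
            grid[r][c] = v
            if backtrack(grid):
                return True
            grid[r][c] = 0  # undo the assignment and try the next value
    return False
```

The tool itself goes beyond this sketch by visualizing each assignment and backtrack step of such a search.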
Deep learning, Python, C++, JavaScript (and its frameworks), Optimization
Anything! I have an unconventional background for working in deep learning, and I'm interested in the overlap between my past and current work.