Currently:
- Developing multimodal code generation for extracting web data at scale @ Reworkd AI (YC S23)
- Understanding how vision-language models build syntactic representations
- Scaling up neural satisfiability solvers
- Exploring how scaling laws scale with data complexity
- Fine-tuning LLM agents to play games with online RL
- Replacing backprop in autoregressive language models
Previously:
- Graduated from Carnegie Mellon '23 with Honors in Computer Science
- My thesis on vision-language semantics is cited by Google Brain, Meta AI, Stanford, etc.
- Published papers at ACL, ICLR, EMNLP, & EACL conferences and NeurIPS & ICCV workshops
- Exited a content research micro-SaaS with some cool clustering, fact-checking, & generation features
- Fine-tuned language models at Microsoft AI over summer '22
- Worked on information retrieval, question answering, & summarization at various startups '20-21
- Developed brain-computer interfaces with NSF funding and placed 1st nationally at NeuroTechX '20
- Won 10+ hackathons including 1st @ Facebook '19, 2nd @ UCLA '19, 3rd @ MIT '20
Warning: has not learned the Bitter Lesson. Prone to getting nerd-sniped by linguistically & cognitively motivated AI research directions.