Doctor of Philosophy
College of Information Sciences and Technology
The Pennsylvania State University
Advisors: C. Lee Giles, Daniel Kifer, Ankur Mali
Bachelor of Engineering (Hons.)
Electrical and Electronics Engineering
BITS Pilani, K.K. Birla Goa Campus
Keywords: Deep Learning, Machine Learning, Neurosymbolic AI, Explainable AI (XAI), Trustworthy AI, Formal Methods
My dissertation research focused on the gaps between the theoretical expressivity and the empirical learnability of RNNs, transformers, and large language models. Specifically, I concentrated on tasks rooted in the Chomsky hierarchy, which provides a well-understood framework for applying formal methods to gauge the limits of learnability. Additionally, I explored the robustness of learned representations by extracting rules, in the form of finite state machines, from neural networks and comparing their stability across grammars, network architectures, initialization methods, and training styles.
Building on my Ph.D. research, my current work applies neurosymbolic ideas to applications such as rule extraction, rule understanding, code generation, and agentic architectures. I focus especially on state space models, large language and reasoning models, and world models.
[September 8, 2025] Presented a poster on A Learnability Study of RNNs on Counter and Dyck Languages at NeSy 2025, UC Santa Cruz.
[November 22, 2024] Gave a talk on Exploring the Limits: Current AI Models and Augmented Systems at NEC Labs America, Princeton.
[August 26, 2024] Joined ADP Innovation Labs as Lead Data Scientist.
[August 19, 2024] Defended Ph.D. on Analyzing the Stability, Learnability, and Precision of Symbolic Structures by Neural Networks.
[July 28, 2024] Gave a talk on Neurosymbolic and Verifiable AI: Stability, Learnability and Precision at Samsung Research America, Irvine.
[June 4, 2024] Investigating Symbolic Capabilities of Large Language Models accepted for oral presentation at LNSAI-IJCAI 2024.
[May 23, 2024] Paper on Investigating Symbolic Capabilities of Large Language Models is out on arXiv.
[May 11, 2024] Presented a poster on Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network at the Workshop on BGPT, ICLR 2024.
[April 5, 2024] Gave a talk on Building Foundations of Neurosymbolic and Verifiable AI: A Stability and Precision Perspective at the AI+X Seminar, University of South Florida.
[April 2, 2024] Presented a poster on Investigation into Math Solving Abilities of LLM based AI Systems at AI Week, Penn State.
Email: neisargdave0 [at] gmail [dot] com