My Research
Keywords: Deep Learning, Machine Learning, Math Understanding in Neural Networks, Neuro-Symbolic AI, Explainable AI (XAI), Trustworthy AI, Formal Methods
My research centers on the gap between what neural network models such as RNNs and Transformers can learn in practice and what they can express in theory. Specifically, I concentrate on tasks rooted in the Chomsky hierarchy, which provides a framework for employing formal methods to gauge the limits of learnability. Additionally, I explore the robustness of encoding rules into, and extracting rules as finite state machines from, neural networks.
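As a minimal, hypothetical sketch of the kind of formal target involved: a regular language can be specified as a deterministic finite automaton (DFA), labeled strings from it can be used to probe what an RNN learns, and a machine extracted from the trained network can be checked against this ground truth. The specific language and names below are illustrative, not taken from my papers.

```python
from typing import Dict, Tuple

# DFA for binary strings containing an even number of 0s -- similar in
# spirit to the Tomita grammars often used as learnability benchmarks.
TRANSITIONS: Dict[Tuple[str, str], str] = {
    ("even", "0"): "odd",
    ("even", "1"): "even",
    ("odd", "0"): "even",
    ("odd", "1"): "odd",
}
START = "even"
ACCEPTING = {"even"}

def accepts(string: str) -> bool:
    """Run the DFA and report whether the string is in the language."""
    state = START
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

# Labeled examples like these can train an RNN; an extracted finite
# state machine can then be compared against this reference DFA.
print(accepts("0101"))  # two 0s -> accepted
print(accepts("0111"))  # one 0 -> rejected
```

Comparing an extracted machine against such a reference automaton is one way to measure the stability and precision of a rule-extraction method.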
Recent Activities
[June 4, 2024] "Investigating Symbolic Capabilities of Large Language Models" accepted for oral presentation at LNSAI-IJCAI 2024.
[May 23, 2024] Paper "Investigating Symbolic Capabilities of Large Language Models" is out on arXiv.
[May 11, 2024] Presenting a poster on "Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network" at the BGPT Workshop, ICLR 2024.
[April 5, 2024] Giving a talk, "Building Foundations of Neurosymbolic and Verifiable AI: A Stability and Precision Perspective," at the AI+X seminar, University of South Florida.
[April 2, 2024] Presenting a poster on "Investigation into Math Solving Abilities of LLM-based AI Systems" at AI Week, Penn State.
Contact
College of Information Sciences and Technology
Pennsylvania State University
E345 Westgate Building, University Park, PA 16802
Email: nud83 [at] psu [dot] edu