About me

I am a fourth-year Ph.D. student in Computer Science and Engineering at Notre Dame, where I work in the DM2 Lab advised by Prof. Meng Jiang.

My research focuses on building reliable LLM agents for real-world decision-making tasks that require multi-step reasoning over diverse sources of information. I work at the intersection of retrieval-augmented generation (RAG), tool-augmented language models (TALMs), and learning-based methods to improve agent behavior in imperfect and evolving environments.

More specifically, I study two themes: designing effective tool-use strategies and reliable task-specific tools for realistic data workflows, and improving agent frameworks with learning signals such as verifiability checks, tool outcomes, and consistency constraints. My goal is to make LLM agents more correct, robust, and efficient in dynamic settings.

Outside research, I am very much a cat person, which means I am easily distracted by cats on the internet and in real life 🐈. I am also a proud dad of two amazing cats: Mam (Fish Sauce) and Muoi Tieu (Pepper Salt).

⭐ Recent News

📄 Apr 2026: OpenTools preprint is now on arXiv.
🎉 Mar 2026: I will join Oracle as an Applied Scientist Intern this summer, working with Dr. Avi Sil. See you, Redwood City!
📄 Aug 2025: LLM Function Calling with Templates (work done during my Amazon internship) accepted at EMNLP 2025 Main.
📄 May 2025: DYDECOMP paper accepted at ACL 2025 Main.
🎉 Aug 2024: Happy to announce that I will join Amazon as an Applied Scientist Intern starting in September! See everyone in Palo Alto soon!
📄 May 2023: Community Recommendation Using Mental Health Discourse paper accepted at CODI @ ACL 2023.
🎓 Aug 2022: Joined the University of Notre Dame as a Ph.D. student in Computer Science & Engineering under the supervision of Prof. Meng Jiang. Go Irish ☘️.
🎓 Dec 2021: Graduated summa cum laude with a 4.0 GPA and double degrees in Computer Science and Mathematics from Texas Christian University.

📃 Publications

Improving Large Language Models Function Calling and Interpretability via Guided-Structured Templates

Hy Dang, Tianyi Liu, Zhuofeng Wu, Jingfeng Yang, Haoming Jiang, Tao Yang, Pei Chen, Zhengyang Wang, Helen Wang, Huasheng Li, Bing Yin, Meng Jiang ·
EMNLP 2025
Optimizing Decomposition for Optimal Claim Verification

Yining Lu, Noah Ziems, Hy Dang, Meng Jiang ·
ACL 2025
Embedding Mental Health Discourse for Community Recommendation

Hy Dang*, Bang Nguyen*, Noah Ziems, Meng Jiang ·
CODI-ACL 2023
A Quantitative Review on Language Model Efficiency Research

Meng Jiang, Hy Dang, Lingbo Tong ·
Preprint

📧 Contact

I’m best reached via email, and I’m always open to interesting conversations and collaborations.

  • Email: hdang [at] nd [dot] edu
  • Office: 355 Fitzpatrick Hall of Engineering
  • Location: University of Notre Dame, Notre Dame, IN 46556