News & Milestones

Workshop Abstract Accepted — AMIA 2025 NLP Workshop

Atlanta, GA • September 12, 2025

Excited to present our work on biomedical knowledge graph applications at the AMIA Natural Language Processing Workshop. This research explores novel approaches to clinical NLP using large language models and structured medical knowledge representations.

Postdoctoral Researcher Position — CU Anschutz Medical Campus

July 2025 – Present

Joined the Department of Biomedical Informatics working with Dr. Yanjun Gao. Research focuses on large-scale biomedical knowledge graphs, clinical natural language processing, and LLM reasoning for healthcare applications. Developing interpretable AI systems for clinical decision support and medical knowledge extraction.

Ph.D. in Computer Science — Completed

Utah State University • December 2024

Successfully defended my dissertation on anomaly detection, interpretable machine learning, and adversarial robustness. The doctoral work centered on backdoor attack detection and led to publications at top-tier venues including ECML-PKDD, PAKDD, IJCNN, and IEEE Big Data. Grateful for the mentorship and collaboration throughout this journey.

Paper Accepted — ECML-PKDD 2024

August 22, 2024

Our paper on interpretable anomaly detection was accepted to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2024). This work presents a novel framework for understanding and explaining anomalous patterns in complex datasets with applications to cybersecurity and fraud detection.

Paper Accepted — PAKDD 2024

April 25, 2024

Research on robustness against backdoor attacks accepted to the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2024). The paper introduces innovative defense mechanisms for detecting and mitigating backdoor vulnerabilities in machine learning models, with implications for secure AI deployment.

Ph.D. Proposal Defense — Passed

Utah State University • March 2024

Successfully defended my doctoral research proposal on trustworthy machine learning systems. The proposal outlined research directions in anomaly detection, model interpretability, and adversarial robustness, spanning theoretical foundations and practical applications across multiple domains.

Paper Accepted — IJCNN 2023

June 18, 2023

Our paper was accepted to and presented at the International Joint Conference on Neural Networks (IJCNN 2023). The work explored deep learning approaches for anomaly detection in streaming data environments, introducing efficient online learning algorithms with theoretical convergence guarantees.

Paper Accepted — IEEE Big Data 2022

December 17, 2022

Published findings on scalable machine learning at the IEEE International Conference on Big Data (IEEE Big Data 2022). The research addressed the computational challenges of processing massive data streams while maintaining model accuracy and interpretability.

Paper Accepted — IEEE Big Data 2021

December 15, 2021

First major conference publication exploring neural network robustness and security vulnerabilities. This foundational work established research directions that would continue throughout the doctoral program, focusing on trustworthy AI systems.