Excited to present our work on biomedical knowledge graph applications at the AMIA Natural Language Processing Workshop. This research explores novel approaches to clinical NLP using large language models and structured medical knowledge representations.
Joined the Department of Biomedical Informatics, working with Dr. Yanjun Gao. Research focuses on large-scale biomedical knowledge graphs, clinical natural language processing, and LLM reasoning for healthcare applications. Developing interpretable AI systems for clinical decision support and medical knowledge extraction.
Successfully defended dissertation on anomaly detection, interpretable machine learning, and adversarial robustness. Conducted extensive research on backdoor attack detection and published multiple papers at top-tier conferences, including ECML-PKDD, PAKDD, IJCNN, and IEEE Big Data. Grateful for the mentorship and collaboration throughout this journey.
Our paper on interpretable anomaly detection was accepted to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2024). This work presents a novel framework for understanding and explaining anomalous patterns in complex datasets, with applications to cybersecurity and fraud detection.
Research on robustness against backdoor attacks was accepted to the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2024). The paper introduces defense mechanisms for detecting and mitigating backdoor vulnerabilities in machine learning models, with implications for secure AI deployment.
Successfully defended doctoral research proposal focusing on trustworthy machine learning systems. The proposal outlined comprehensive research directions in anomaly detection, model interpretability, and adversarial robustness, with both theoretical foundations and practical applications across multiple domains.
Presented research at the International Joint Conference on Neural Networks (IJCNN 2023). This work explored deep learning approaches for anomaly detection in streaming data environments, introducing efficient online learning algorithms with theoretical convergence guarantees.
Published findings on scalable machine learning for large-scale datasets at IEEE Big Data Conference 2022. The research addressed computational challenges in processing massive data streams while maintaining model accuracy and interpretability.
First major conference publication exploring neural network robustness and security vulnerabilities. This foundational work established research directions that would continue throughout the doctoral program, focusing on trustworthy AI systems.