In the real world, perception and inference are deeply intertwined. Take autonomous driving, for example: a self-driving car cannot determine whether a pedestrian is about to cross the street without first perceiving the pedestrian's posture and the status of the traffic light. Yet in artificial intelligence, perception and inference are often treated as two separate tasks: deep learning excels at perception, while probabilistic graphical models are typically used for inference.
To bridge this gap, Hao Wang proposed the HBDL framework in his doctoral thesis, integrating deep-learning-based perception with probabilistic-graphical-model-based inference. Building on and expanding this framework, Hao has since focused on two major directions: enhancing the interpretability and controllability of large language and foundation models, and applying AI to healthcare.
He first introduced a causal explainer for graph neural networks (GNNs), which stands out for its universality, minimal assumptions, and verifiability: it identifies the causal semantics needed to generate explanations and significantly outperforms existing approaches. He later developed a counterfactual explainer for time-series prediction models, making it possible to interpret any time-series model through a counterfactual lens.
In addition, Hao proposed a concept-level explainer for multimodal large language and foundation models, capable of interpreting model predictions and reasoning through human-understandable concepts. He has also applied the HBDL framework to healthcare, incorporating interpretability into health monitoring systems, improving medication adherence, and enabling early detection of diseases such as Parkinson’s.
These research achievements have impacted areas such as biomedicine, recommender systems, and weather forecasting, and have been adopted by industry leaders including Microsoft, Amazon, and the MIT startup Emerald Innovation, benefiting millions of users to date.