AI models such as deep neural networks are employed to make highly consequential decisions in our daily lives. However, these complex models, trained purely on massive amounts of data, are often treated as black boxes, lacking interpretations of their internal mechanisms as well as explanations for their output predictions. Bolei's work focuses on opening the black box: revealing what's inside networks trained for image recognition and image synthesis, and continuing the effort to make AI models more transparent.
About Bolei Zhou
Bolei is an Assistant Professor in the Department of Information Engineering at the Chinese University of Hong Kong. He received his PhD degree in computer science from the Massachusetts Institute of Technology. His research is in machine perception and decision making, with a focus on enabling machines to sense and reason about their environment through learning more interpretable and structured representations. He received the Facebook Fellowship, the Microsoft Research Asia Fellowship, and the MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News.