We live in an era of information deluge, with massive amounts of data accumulating from everyday life. Even the smartest humans can find themselves overwhelmed by the highly dynamic, large-scale data streams they must keep up with. Artificial intelligence can help us process these streams more efficiently, but entrusting AI systems with autonomous decisions remains difficult.
Furong Huang, an assistant professor at the University of Maryland, is committed to developing trustworthy artificial intelligence and machine learning (AI/ML) models that augment or amplify human intelligence in the service of safe and efficient decision-making in everyday situations. The models she develops learn patterns and knowledge from collected big data, such as historical driving records, to support autonomous decision-making in changed settings, such as driving in an unexplored location under severe weather.
As part of this fundamental research, Furong has made foundational contributions to non-convex optimization, the core tool behind deep learning, a prevalent AI/ML approach. She and her collaborators were the first to prove that first-order gradient information, which is efficient to compute, can guarantee convergence to optimal solutions in non-convex optimization. This pioneering work laid the theoretical foundations for optimization in deep learning and sparked a surge of subsequent research.
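To give a flavor of what "first-order gradient information" means, here is a minimal sketch of plain gradient descent on a simple non-convex function. This toy example is an illustration of the general idea only, not Huang's algorithm or analysis: the function f(x) = (x² − 1)² is non-convex, with two global minima at x = ±1 and a flat stationary point at x = 0, yet using only first derivatives the iterates still reach an optimum.

```python
# Toy illustration (not Huang's method): first-order gradient descent
# on the non-convex function f(x) = (x^2 - 1)^2.
# Global minima sit at x = +1 and x = -1; x = 0 is a stationary point.

def grad(x):
    # f'(x) = 4x(x^2 - 1): the only information the optimizer uses
    return 4.0 * x * (x * x - 1.0)

def gradient_descent(x0, lr=0.05, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill along the negative gradient
    return x

# Starting slightly away from the stationary point at 0,
# the iterates converge to the global minimum at x = 1.
x_star = gradient_descent(0.1)
print(round(x_star, 4))  # → 1.0
```

The starting point 0.1 is a nudge away from the bad stationary point at x = 0; avoiding such points is exactly the kind of difficulty that makes convergence guarantees for non-convex problems hard to establish.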
Furong’s work on transfer learning also expedites autonomous decision-making in dynamically changing environments, with theoretically guaranteed improvements over the state of the art in both effectiveness and efficiency. Her work provides the first method that allows knowledge transfer across drastically different observation spaces, for example from a patrol robot equipped with GPS sensors to one equipped with cameras. This yields a highly practical approach to efficient planning and long-term autonomous decision-making.
Looking ahead, rather than designing ad-hoc mechanisms to mitigate security concerns and privacy issues after they are uncovered, Furong hopes to advance spectral methods for designing novel deep neural network architectures that guarantee interpretability, fairness, privacy, and robustness even before training begins.