Recent advances in natural language processing (NLP) have fundamentally changed how we interact with computing devices. Nonetheless, current NLP systems remain far from understanding human language in all of its complexity. Humans are remarkably adept at understanding text that covers new topics and novel situations, even when the language is ambiguous or implicit. In contrast, existing NLP systems often fail basic tests of generalization and robustness.
Prof. Xiang Ren, an Associate Professor at the University of Southern California, seeks to build generalizable NLP systems—i.e., systems that can handle a wide variety of language tasks and situations—by broadening the scope of model generality while guiding models to learn desirable inductive biases and arming them with commonsense knowledge. To this end, he has developed models and learning algorithms, designed new evaluation protocols, and built datasets.
Ren has created evaluation methods and datasets that expose the limitations of state-of-the-art NLP systems across a variety of generalization scenarios; developed regularization techniques, model architectures, and learning objectives that incorporate inductive biases; and devised knowledge-augmentation methods that leverage external resources and novel learning algorithms to equip models with common sense.
His work has been recognized with multiple awards, including the 2023 ACL Outstanding Paper Award, the 2022 NAACL Outstanding Paper Award, the ACM SIGKDD Doctoral Dissertation Award, and the Best Paper Award Runner-up at The Web Conference (WWW).
Ren’s future research will continue to focus on developing general AI models and algorithms that enable easy, natural, and trustworthy interaction with machines, making them suitable for a wide variety of tasks and situations.