In the past, machine learning research focused on adapting algorithms to existing hardware. In recent years, however, owing to the surging popularity of a new generation of programmable hardware, designing the algorithm and the hardware together has been gradually gaining the attention of both industry and academia.
This trend also caught the attention of Tencent AI Lab researcher Ji Liu. Building on this idea, he proposed jointly optimizing the algorithm and the hardware design, and developed the first machine learning framework to support "end-to-end" low-precision operations. He has also extended his work to other important topics such as asynchronous parallel algorithms, tensor completion, and decentralized optimization.
He proposed a series of asynchronous parallel algorithms to remove the bottleneck of traditional synchronous parallel algorithms: fast processes waiting for slow ones. His research also settled an open problem in asynchronous parallel deep learning: whether asynchronous SGD algorithms can guarantee both correctness and efficiency. Asynchronous parallel SGD is now widely used in mainstream machine learning software and platforms.
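The asynchronous idea can be sketched in a few lines: several worker threads read the shared parameters and apply gradient updates without waiting for each other, so fast workers never block on slow ones. The toy objective f(w) = (w - 3)^2, the step size, and all names below are illustrative assumptions for this sketch, not Ji's actual implementation.

```python
import threading

w = [0.0]    # shared parameter, updated without any locking
LR = 0.05    # step size (illustrative)

def grad(x):
    """Gradient of the toy objective f(w) = (w - 3)^2 at x."""
    return 2.0 * (x - 3.0)

def worker(steps):
    # Each worker loops independently: read (possibly stale) parameters,
    # compute a gradient, and write the update back asynchronously.
    for _ in range(steps):
        x = w[0]
        w[0] = x - LR * grad(x)

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(w[0])  # converges near the minimizer w* = 3 despite stale reads
```

Despite races between workers, every update still contracts toward the minimizer; bounding the effect of such stale reads is exactly the kind of correctness question the asynchronous SGD analysis addresses.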
In addition, Ji designed a decentralized parallel computing framework for machine learning. Compared to the traditional centralized parallel computing framework, this decentralized framework can greatly reduce the communication costs.
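A minimal sketch of the decentralized style of training, under illustrative assumptions (a 4-node ring, scalar quadratic objectives, a constant step size): each node keeps its own parameter copy, takes a local gradient step, then averages only with its ring neighbors via gossip instead of communicating with a central server. None of the specifics below come from Ji's actual framework.

```python
N = 4                            # nodes arranged on a ring
targets = [1.0, 2.0, 4.0, 5.0]   # node i holds local objective (w - targets[i])^2
params = [0.0] * N               # each node's local parameter copy
LR = 0.05

def neighbors(i):
    """Ring topology: each node talks only to its two neighbors."""
    return [(i - 1) % N, (i + 1) % N]

for _ in range(300):
    # Local gradient step on every node, using only local data.
    local = [params[i] - LR * 2.0 * (params[i] - targets[i]) for i in range(N)]
    # Gossip step: average with ring neighbors only, no central server.
    params = [(local[i] + sum(local[j] for j in neighbors(i))) / 3.0
              for i in range(N)]

print(params)  # all nodes end up close to the global mean 3.0
```

Each node exchanges data with just two neighbors per round rather than with one central parameter server, which is where the communication savings come from; the nodes still agree (up to a small consensus error from the constant step size) on a value near the global minimizer.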
As for his current research in game AI, Ji believes that "real-time strategy games are actually a lot harder than Go. For me, the significance of this project is to explore the boundaries of AI capabilities."
In designing game AI, improvements to reinforcement learning algorithms play a crucial role. However, most reinforcement learning algorithms are largely heuristic and lack theoretical support and understanding. Ji breaks with this convention by translating the reinforcement learning problem into an equivalent minimax optimization problem, so that many theoretical results and computational methods from optimization can be applied to reinforcement learning more readily.
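A toy illustration of the minimax viewpoint: treat one set of variables as the "min" player and the other as the "max" player, and run simultaneous gradient descent-ascent, a standard saddle-point method from optimization. The quadratic objective f(x, y) = x² + xy - y² below is an illustrative stand-in for the saddle-point problems that arise when reinforcement learning is rewritten as minimax; it is not Ji's actual formulation.

```python
x, y = 1.0, -1.0   # min player x, max player y (illustrative start)
ETA = 0.05         # step size

for _ in range(500):
    gx = 2.0 * x + y    # df/dx of f(x, y) = x^2 + x*y - y^2
    gy = x - 2.0 * y    # df/dy
    x -= ETA * gx       # gradient *descent* on the min player
    y += ETA * gy       # gradient *ascent* on the max player

print(x, y)  # both players approach the saddle point (0, 0)
```

The payoff of the reformulation is exactly this: once the problem is a minimax, convergence guarantees and solvers developed for saddle-point optimization carry over, instead of relying on heuristics.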
Ji also continues his research on decentralized parallel computing, which may change the design principles behind current mainstream machine learning platforms (such as Google's TensorFlow) and is likely to enable new Internet services as well.