AlphaGo, the artificial intelligence that beat the best human player at Go in 2016, needed nearly 2,000 central processing units and 300 graphics processing units to function. As a result, it ran up electricity bills of about $3,000 per game. Song Han has designed software and hardware that enable powerful AI programs like AlphaGo to run on low-power mobile devices.
The “deep compression” technique Han invented makes it possible to run AI algorithms that recognize objects, generate imagery, and understand human language in real time on a smartphone. Facebook, among other companies, uses Han’s software design to reduce the computation an object-recognition algorithm needs. This lets people use their smartphone camera to pinpoint objects in the real world and then add digital visual effects.
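The core ideas behind deep compression can be illustrated with a minimal sketch: prune away small-magnitude weights, then force the survivors to share a small codebook of values so each weight can be stored as a few-bit index. The function names and thresholds below are illustrative assumptions, not Han's implementation; the published technique additionally retrains the network after pruning, learns the codebook with k-means, and applies Huffman coding to the indices.

```python
import numpy as np

def prune_weights(w, sparsity=0.9):
    """Magnitude pruning: zero out the smallest-magnitude weights.
    (Illustrative; real pipelines retrain the network with the mask fixed.)"""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) > threshold
    return w * mask, mask

def quantize_weights(w, n_clusters=16):
    """Weight sharing: snap each weight to the nearest of n_clusters shared
    values (a uniform-grid stand-in for learned k-means centroids)."""
    centers = np.linspace(w.min(), w.max(), n_clusters)
    idx = np.abs(w[..., None] - centers).argmin(axis=-1)
    return centers[idx], idx.astype(np.uint8)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # a toy weight matrix

pruned, mask = prune_weights(w, sparsity=0.9)      # keep ~10% of weights
quantized, codes = quantize_weights(pruned)        # 16 shared values

# Surviving weights now need only 4-bit codebook indices instead of
# 32-bit floats, which is where the storage savings come from.
print(f"nonzero fraction: {mask.mean():.2f}")
print(f"distinct values after sharing: {np.unique(quantized).size}")
```

With 90% sparsity and a 16-entry codebook, the toy matrix shrinks from 32 bits per weight toward a few bits per surviving weight, before any entropy coding.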
In 2016, based on his innovations, Han cofounded an AI chip company in Beijing called DeePhi Tech, which Xilinx, an American semiconductor company, acquired last year.
In his new role as an assistant professor at MIT, Han is automating the design of AI algorithms. The goal is to “let any non-expert push a button and design compact neural networks,” he says, referring to the computing systems loosely modeled after the human brain that are central to how AI works.
Software developers without AI expertise, he says, would be able to use such neural networks to classify objects, improve the resolution of images, and analyze videos more efficiently.