As a fundamental enabler of broad-spectrum applications in smart cities, virtual and augmented reality, and robotics, smart imaging has long attracted both academia and industry, yet remains full of challenges. Dr. Lu Fang has devoted herself to this frontier area and launched both theoretical and technical research.
Single-sensor imaging has limited performance in acquiring multi-dimensional images containing spatial, temporal, angular, spectral, and dynamic information. Parallel sampling and processing in array cameras can enable 10-100× increases in pixel capacity. Existing array cameras comprise multiple homogeneous microcameras arranged in a highly structured way; such uniform pixel density is a legacy concept inherited from single-aperture cameras. Dr. Lu Fang proposed UnstructuredCam, an unstructured heterogeneous array camera that enables scalable and smart sampling. By using wide-angle microcameras to capture the panoramic scene alongside telephoto microcameras for local dynamic details, UnstructuredCam follows the user's perspective on natural gigapixel-scale scenes while minimizing system cost, operating power, and storage and processing costs. Furthermore, her team proposed Multiscale-VR, a multiscale unstructured array camera for high-quality gigapixel 3D panoramic videography, creating 6DoF multiscale interactive VR content.
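The heterogeneous-sampling idea can be illustrated with a small sketch. This is not the UnstructuredCam implementation; the function name, tile ids, and motion scores below are all hypothetical. It simply shows the principle: a fixed budget of telephoto microcameras is greedily steered to the panoramic regions with the most dynamics, so pixel density follows scene content rather than a uniform grid.

```python
# Hypothetical sketch of content-adaptive telephoto assignment.
# motion_scores: maps a panoramic tile id to a motion/detail score
# (how it is computed is outside the scope of this sketch).

def assign_telephoto(motion_scores, num_telephoto):
    """Return the tile ids that the available telephoto microcameras
    should cover, ranked by decreasing motion score."""
    ranked = sorted(motion_scores, key=motion_scores.get, reverse=True)
    return ranked[:num_telephoto]

# Example: four panoramic tiles, two telephoto cameras available.
scores = {"tile_a": 0.1, "tile_b": 0.9, "tile_c": 0.4, "tile_d": 0.7}
print(assign_telephoto(scores, 2))  # ['tile_b', 'tile_d']
```

In a real system the assignment would also account for camera pose and overlap constraints; the sketch keeps only the core idea of unstructured, demand-driven sampling.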
With the emergence of new generations of computational imaging systems and AI technologies, bridging the gap between computational photography and computer vision is in great demand. Dr. Lu Fang proposed PANDA, a novel gigapixel-level human-centric video dataset supporting large-scale, long-term, multi-object visual analysis. The videos contain on the order of 4K head counts with over 100× scale variation, and are equipped with rich ground-truth annotations. PANDA helps researchers understand the complicated behaviors and interactions of crowds in large-scale real-world scenes.
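To make the "over 100× scale variation" claim concrete, one common way to measure it is the ratio between the largest and smallest person bounding boxes visible in a frame. The helper and the pixel heights below are illustrative assumptions, not values from the PANDA annotations:

```python
# Hypothetical sketch: scale variation as the ratio of the tallest to the
# shortest pedestrian bounding box (in pixels) within one gigapixel frame.

def scale_variation(box_heights):
    """box_heights: pixel heights of person boxes in a frame."""
    return max(box_heights) / min(box_heights)

# Illustrative heights: distant pedestrians may span ~18 px while nearby
# ones span ~1800 px in a gigapixel frame.
heights = [18, 120, 900, 1800]
print(scale_variation(heights))  # 100.0
```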
High-dimensional visual data acquisition, reconstruction, and processing demand ever more computational resources as data size and model complexity grow significantly. Combining optical computing with electronic computing can fundamentally accelerate model iteration. Working at the intersection of computer vision and graphics, scientific computing, machine learning, optics, and electronics, Dr. Lu Fang will pursue this interdisciplinary approach, focusing on the development of cutting-edge smart imaging systems and high-performance optoelectronic computing platforms.
Although optoelectronic computing still faces various limitations before it can be integrated with smart imaging, she remains confident. "We look forward to the realization of a new generation of smart cameras in the near future, which can not only serve smart cities, but also build a new generation of vision radar to serve autonomous driving and unmanned systems," she said.