
Artificial intelligence & robotics

Tsenjung Tai

High-precision image recognition technology developed for SAR satellite imagery.

Year Honored
2022

Region
Japan

As climate change advances, the damage inflicted by typhoons, severe rainstorms, and other natural disasters has escalated in recent years. As these calamities grow more severe, space satellites have come into the spotlight as a way to quickly assess damage when large-scale disasters strike. Among them, Synthetic Aperture Radar (SAR) satellites are expected to play an active role because they can survey the earth's surface regardless of weather conditions or time of day. By analyzing images taken by SAR satellites, damage conditions can be assessed accurately and automatically, and the results can then be used to plan rescue operations, disaster relief, recovery efforts, and more.


That said, it takes a few days or more for a single satellite to return to and observe the same point on the globe, so images from a number of different satellites must be analyzed to assess the damage promptly.


Tsenjung Tai, a researcher at NEC, has developed an image analysis AI specifically suited to SAR image data. Its key feature is that it needs only a small amount of training data to achieve high-precision image recognition.


Image recognition commonly relies on deep learning, which requires large volumes of training images to be prepared in advance. SAR imagery, however, comes from specialized sensors, so gathering large amounts of it ahead of time is a major obstacle. Tai's approach works around this with transfer learning: the model learns discriminative features from simulated data modeled on real SAR images, then adapts that knowledge to recognition tasks on real data by referring to only a small number of real samples.
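The general flow can be illustrated with a short sketch. The code below is not NEC's implementation; the network, data shapes, and hyperparameters are illustrative assumptions. It pre-trains a small classifier on plentiful simulated SAR chips, then adapts it to a handful of real ones by freezing the feature extractor and fine-tuning only the category head.

```python
# Minimal transfer-learning sketch (illustrative assumptions, not NEC's method).
import torch
import torch.nn as nn

class SARClassifier(nn.Module):
    """Small CNN: a shared feature extractor plus a category head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

def fake_loader(n, batch=16):
    # Stand-in data: random single-channel chips in place of real/simulated SAR images.
    x = torch.randn(n, 1, 64, 64)
    y = torch.randint(0, 10, (n,))
    ds = torch.utils.data.TensorDataset(x, y)
    return torch.utils.data.DataLoader(ds, batch_size=batch, shuffle=True)

model = SARClassifier()

# 1) Pre-train on abundant simulated imagery.
train(model, fake_loader(2000), epochs=5, lr=1e-3)

# 2) Adapt to real imagery with only a small labelled set:
#    freeze the feature extractor, fine-tune the category head.
for p in model.features.parameters():
    p.requires_grad = False
train(model, fake_loader(50), epochs=20, lr=1e-4)
```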


With conventional transfer learning, however, categorization knowledge is adapted by aligning the features of corresponding objects between simulated and real images. In SAR imagery, the appearance of the same object can vary drastically with the observation angle, which made accurate image analysis difficult. Tai devised a method that accounts for the differences caused by observation angles rather than just the gap between simulated and real images: it learns the 3D structural information of objects from 2D imagery and predicts category features that correspond to each observation angle. By relating features across changes in viewing angle, the AI can account for intra-category feature variations and adapt its categorization knowledge accurately with only a small amount of real image data.
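As a rough illustration of the angle-aware idea, the sketch below conditions a feature encoder on the observation angle so that features of the same object seen from different angles can be related and aligned between simulated and real data. The architecture, angle encoding, and alignment loss shown here are illustrative assumptions, not the published design.

```python
# Angle-conditioned feature sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class AngleConditionedEncoder(nn.Module):
    """Encodes a SAR chip together with its observation angle."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Embed the angle as (sin, cos) to avoid the 0/360-degree discontinuity,
        # then fuse it with the image feature.
        self.angle_embed = nn.Sequential(nn.Linear(2, feat_dim), nn.ReLU())
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, image, angle_deg):
        rad = torch.deg2rad(angle_deg)
        angle_feat = self.angle_embed(torch.stack([rad.sin(), rad.cos()], dim=-1))
        image_feat = self.backbone(image)
        return self.fuse(torch.cat([image_feat, angle_feat], dim=-1))

# Features of the same object class from simulated and real chips, observed at
# different angles, now live in a shared space and can be aligned during
# transfer, e.g. with a simple similarity loss.
encoder = AngleConditionedEncoder()
sim_feat = encoder(torch.randn(4, 1, 64, 64), torch.tensor([10., 35., 80., 120.]))
real_feat = encoder(torch.randn(4, 1, 64, 64), torch.tensor([15., 40., 85., 125.]))
alignment_loss = (1 - nn.functional.cosine_similarity(sim_feat, real_feat)).mean()
```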


Tai's method roughly halved the error rate of conventional approaches on SAR image recognition tasks. The results of this research have been presented at leading international conferences in fields such as computer vision and remote sensing.


Tai is also pushing forward research that applies the SAR image analysis technology to footage captured by ground-level security cameras. Automating how security cameras are positioned and adjusted could simplify the processes needed to set up and run video image analysis.