{"id":881361,"date":"2022-09-27T18:49:15","date_gmt":"2022-09-28T01:49:15","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/"},"modified":"2022-09-27T18:49:15","modified_gmt":"2022-09-28T01:49:15","slug":"location-aware-super-resolution-for-satellite-data-fusion","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/location-aware-super-resolution-for-satellite-data-fusion\/","title":{"rendered":"Location Aware Super-Resolution for Satellite Data Fusion"},"content":{"rendered":"
Satellite data fusion involves images with different spatial, temporal, and spectral resolutions. These images are taken under different illumination conditions, with different sensors and atmospheric noise. We use classic super-resolution algorithms to synthesize commercial satellite images from a public satellite source (Sentinel-2). Each super-resolution method is then further improved by adaptive, location-specific sharpening via matrix completion (regression with missing pixels). Finally, we consider ensemble systems and a residual channel attention dual network with stochastic dropout. The resulting systems are visibly less blurry, with higher fidelity, and yield improved performance.<\/p>\n