A framework for street view texture refinement of LoD2 building models using roughly posed terrestrial photos
UAV-based oblique photogrammetric 3D reconstruction is widely used for large-scale urban modeling, and shop signs on the reconstructed models serve as vital elements for visualization and a range of urban studies. However, texture quality, particularly at street level, is often degraded by low resolution and occlusion, leaving shop signs blurred and illegible. This paper proposes a framework for refining street view shop sign textures on LoD2 building models generated by oblique photogrammetry. Taking the building models and roughly posed terrestrial photos as input, the framework first performs coarse image-to-model alignment via ray-mesh intersection and then applies a multi-scale, point-area fused image matching strategy for precise texture refinement. Although designed around a specific POS (position and orientation system) device, the framework can adapt to systems providing pose data of similar or even lower accuracy. Both visual and statistical results demonstrate the effectiveness of our approach in enhancing texture quality and overall visual fidelity. The dataset used in this research is available at this URL.
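The coarse alignment step casts camera rays against the LoD2 mesh and keeps the first triangle each ray hits. As an illustrative sketch only (not the paper's implementation), the core ray-triangle test can be written with the standard Möller-Trumbore algorithm; all function and variable names below are hypothetical.

```python
# Minimal Möller-Trumbore ray-triangle intersection (pure Python, no deps).
# A ray-mesh intersection would loop this test over all mesh triangles
# (or use a spatial index) and keep the smallest positive t.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the triangle, or None if no hit."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv            # distance along the ray
    return t if t > eps else None

# A ray pointing straight down the z-axis hits the unit triangle at z = 0.
t = ray_triangle_intersect((0.2, 0.2, 5.0), (0.0, 0.0, -1.0),
                           (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(t)  # → 5.0
```

In practice a library such as trimesh or Embree would perform the mesh-level query with an acceleration structure rather than brute-force triangle loops.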