Acta Geodaetica et Cartographica Sinica ›› 2026, Vol. 55 ›› Issue (1): 124-137. doi: 10.11947/j.AGCS.2026.20250262

• Cartography and Geoinformation •

A Transformer model for building polygon simplification in map generalization

Pengcheng LIU1,2(), Xiaoqiang CHENG3, Tianyuan XIAO4, Min YANG4, Tinghua AI4   

  1. Key Laboratory for Geographical Process Analysis & Simulation of Hubei Province, Central China Normal University, Wuhan 430079, China
    2.School of Urban and Environmental Sciences, Central China Normal University, Wuhan 430079, China
    3.Faculty of Resources and Environmental Science, Hubei University, Wuhan 430062, China
    4.School of Resources and Environmental Sciences, Wuhan University, Wuhan 430079, China
  • Received:2025-06-30 Revised:2026-01-05 Published:2026-02-13
  • About author:LIU Pengcheng (1968—), male, PhD, professor, majors in map generalization, spatial pattern recognition and GeoAI. E-mail: liupc@ccnu.edu.cn
  • Supported by:
    The National Natural Science Foundation of China(42471486; 42071455);The Fundamental Research Funds for the Central Universities(CCNU25JC043)

Abstract:

To address key issues in building polygon simplification within map generalization (reliance on manually crafted rules, low automation, and difficulty in reusing existing simplification results), this paper proposes a building polygon simplification model based on the Transformer architecture. The model first maps building polygons into a grid space at a given scale, representing the polygons' coordinate strings as grid sequences. This yields token sequences before and after simplification, from which paired building polygon simplification samples are constructed. Built on the Transformer architecture, the model learns dependencies between point sequences through masked self-attention and generates the simplified polygon point by point. During training, the model uses structured sample data and a cross-entropy loss that ignores specified indices to improve simplification quality. The experiments comprise two parts: a main experiment and a generalization validation. The main experiment, based on a 1∶2000 building dataset of Los Angeles, encodes polygons at three grid sizes (0.2, 0.3, and 0.5 mm) to produce simplifications at target scales of 1∶5000 and 1∶10 000. The results show that the model performs best with a 0.3 mm grid, achieving a consistency rate above 92.0% with manual annotations on the validation set. A generalization experiment on building polygon data from parts of Beijing further verifies the model's transferability. A comparison with an LSTM model of similar parameter scale shows that the LSTM fails to converge effectively and cannot produce usable results. This study confirms the potential of the Transformer for spatial geometric sequence tasks and demonstrates that existing simplification samples can be effectively reused. The proposed approach offers a new, engineering-practical pathway for intelligent building polygon simplification.
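Two of the steps described in the abstract can be sketched in code: mapping polygon coordinates to grid-cell token sequences, and a cross-entropy loss that skips ignored indices. This is a minimal illustration, not the paper's implementation; the grid width `GRID_COLS`, the padding id `PAD`, and the function names are assumptions for the sketch.

```python
import math

GRID_MM = 0.3     # grid size in mm; the paper tests 0.2, 0.3 and 0.5 mm
GRID_COLS = 512   # assumed grid width used to flatten (row, col) into one token id
PAD = -1          # assumed padding index ignored by the loss

def tokenize_polygon(coords, grid_mm=GRID_MM, cols=GRID_COLS):
    """Snap (x, y) vertices (in map mm) to grid cells and encode each cell as
    one integer token, dropping consecutive duplicates created by snapping."""
    tokens = []
    for x, y in coords:
        col = int(x / grid_mm)
        row = int(y / grid_mm)
        tok = row * cols + col  # flatten the 2-D cell index to a single token id
        if not tokens or tokens[-1] != tok:
            tokens.append(tok)
    return tokens

def cross_entropy_ignore(probs, targets, ignore_index=PAD):
    """Mean negative log-likelihood over target tokens, skipping positions
    whose target equals ignore_index (e.g. padding)."""
    losses = [-math.log(p[t]) for p, t in zip(probs, targets) if t != ignore_index]
    return sum(losses) / len(losses) if losses else 0.0
```

In a full model, the token sequences would feed a decoder-only Transformer whose masked self-attention predicts the next simplified-polygon token, and the loss above would be evaluated over the predicted distributions at each step.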

Key words: map generalization, building polygon simplification, tokenization, Transformer model, context engineering
