Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54463
Full metadata record
DC Field | Value | Language
dc.contributor.author | ZiRui Shen | -
dc.contributor.author | Xin Li | -
dc.contributor.author | Sheng Xu | -
dc.coverage.spatial | Minsk | en_US
dc.date.accessioned | 2024-03-01T08:13:37Z | -
dc.date.available | 2024-03-01T08:13:37Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | ZiRui Shen. RMNET: A Residual and Multi-scale Feature Fusion Network For High-resolution Image Semantic Segmentation / ZiRui Shen, Xin Li, Sheng Xu // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 101–106. | en_US
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54463 | -
dc.description.abstract | High-resolution remote sensing images have high clarity and provide significant support for urban planning, resource management, environmental monitoring, and disaster warning. Semantic segmentation helps accurately extract object boundaries, thereby increasing the application value of scene understanding. Traditional encoder-decoder architecture networks lack multi-scale information fusion and fail to capture precise multi-scale semantic information when segmenting targets at different scales. Additionally, these semantic segmentation networks handle class-imbalanced data inadequately, resulting in unsatisfactory classification results and a poor final segmentation effect. This paper proposes a semantic segmentation network based on residual blocks and multi-scale feature fusion. Building upon the U-Net network, we design residual modules and multi-scale feature fusion modules to extract information-rich feature maps. The multi-scale feature fusion module then interpolates and upsamples the obtained feature maps, which are concatenated with feature maps at the same layer, producing a novel fused feature map. In experiments, the proposed model surpasses U-Net, with improvements reaching 6.06% in MIoU. The introduced network identifies complex land features (densely distributed objects, small objects, large differences in object characteristics, and complex backgrounds) and, by incorporating the multi-scale feature fusion module, effectively preserves and restores feature information, achieving higher-precision segmentation results with rich multi-scale and spatial information. | en_US
dc.language.iso | en | en_US
dc.publisher | BSU | en_US
dc.subject | conference proceedings | en_US
dc.subject | deep learning | en_US
dc.subject | high-resolution | en_US
dc.subject | semantic segmentation | en_US
dc.title | RMNET: A Residual and Multi-scale Feature Fusion Network For High-resolution Image Semantic Segmentation | en_US
dc.type | Article | en_US
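The abstract describes a fusion step in which a deeper (lower-resolution) feature map is upsampled and then concatenated with the feature map at the same layer. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the function names are hypothetical, and nearest-neighbor upsampling stands in for whatever interpolation the paper actually uses.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse(deep, same_layer):
    """Upsample the deeper feature map to the spatial size of the
    same-layer feature map, then concatenate along the channel axis,
    as in U-Net-style skip connections."""
    factor = same_layer.shape[1] // deep.shape[1]
    up = upsample_nearest(deep, factor)
    return np.concatenate([up, same_layer], axis=0)

# Example: a 64-channel 16x16 deep map fused with a 32-channel 32x32 map
deep = np.zeros((64, 16, 16))
skip = np.zeros((32, 32, 32))
fused = fuse(deep, skip)
print(fused.shape)  # (96, 32, 32)
```

Concatenating (rather than adding) the two maps keeps both the upsampled semantic information and the same-layer spatial detail, which a following convolution can then combine.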
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
File | Size | Format
ZiRui_Shen_RMNET.pdf | 1.4 MB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.