Human Parsing With Part-Aware Relation Modeling


Abstract:

In this paper, a Part-aware Relation Modeling (PRM) method is developed to handle the task of human parsing. For pixel-level recognition, it is essential to generate features with adaptive context for human parts of various sizes and shapes. To address this issue, we adaptively capture context with a part-aware relation mechanism. PRM consists of three modules: a part class module, a part-relation aggregation module, and a part-relation dispersion module. The part class module selectively enhances the spatial details of the high-level features to obtain enhanced features, and then extracts a high-level representation of each human part from a categorical perspective. The part-relation aggregation module extracts a representative global context by exploring the associated semantics of human parts, adaptively augmenting the context for each part. The part-relation dispersion module generates discriminative and effective local context while suppressing distracting context by dispersing the affinity among human parts; this keeps features of the same class close to each other and far from those of different classes. By fusing the outputs of the two part-relation modules with the enhanced features from the part class module, PRM produces adaptive contextual features for human parts of diverse sizes, boosting parsing accuracy. Extensive experiments validate the effectiveness of our network, and new state-of-the-art segmentation performance is achieved on three challenging human parsing datasets, i.e., PASCAL-Person-Part, LIP, and CIHP. PRM also extends to other tasks such as animal parsing, demonstrating its generality.
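
The three-module decomposition described above can be made concrete with a short sketch. The PyTorch code below is an illustrative approximation only, not the authors' released implementation: the class-probability pooling of part representations, the affinity-based aggregation, and the negated-affinity dispersion are assumptions about how the described mechanisms could work, and the names PRMSketch, channels, and num_parts are hypothetical.

```python
# Illustrative sketch of the PRM idea (assumed design, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PRMSketch(nn.Module):
    def __init__(self, channels: int, num_parts: int):
        super().__init__()
        # Part class module (simplified): a 1x1 classifier whose score maps
        # are used to pool one representation vector per part class.
        self.classifier = nn.Conv2d(channels, num_parts, kernel_size=1)
        # Fusion of the enhanced features with both relation branches.
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        scores = self.classifier(feats)                 # (B, K, H, W)
        attn = F.softmax(scores.flatten(2), dim=-1)     # per-part spatial weights
        pixels = feats.flatten(2).transpose(1, 2)       # (B, HW, C)
        # Part representations: weighted average of pixel features per class.
        parts = torch.bmm(attn, pixels)                 # (B, K, C)

        # Part-relation aggregation: each pixel gathers global context from
        # the part representations it is most related to.
        affinity = torch.bmm(pixels, parts.transpose(1, 2))   # (B, HW, K)
        agg = torch.bmm(F.softmax(affinity, dim=-1), parts)   # (B, HW, C)
        agg = agg.transpose(1, 2).reshape(b, c, h, w)

        # Part-relation dispersion (an assumed formulation): weight parts by
        # negated affinity so weakly related, distracting context is separated
        # out, keeping same-class features close and off-class context apart.
        disp = torch.bmm(F.softmax(-affinity, dim=-1), parts)
        disp = disp.transpose(1, 2).reshape(b, c, h, w)

        return self.fuse(torch.cat([feats, agg, disp], dim=1))

# Example usage with hypothetical sizes (LIP has 20 classes incl. background):
prm = PRMSketch(channels=256, num_parts=20)
out = prm(torch.randn(2, 256, 32, 32))   # -> (2, 256, 32, 32)
```

The key design point the sketch mirrors is the final fusion: the original features and the two relation branches are concatenated and projected back, so the network can weigh global aggregation against local dispersion per pixel.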
Published in: IEEE Transactions on Multimedia (Volume: 25)
Page(s): 2601 - 2612
Date of Publication: 07 February 2022
