Semantic Segmentation of 3D Facial Models Using 2D Annotations

Abstract

Semantic information retrieved from the human face can improve human-machine interaction, add a new level of information compression, and expand multi-modality in data analysis. Such information is extracted through semantic segmentation of images of the human face, which consists of automatically identifying the areas of a facial image that correspond to distinct face parts carrying meaningful information for humans, such as the nose, eyes, forehead, and ears. In this work we propose a new algorithm, based on an auto-encoder architecture, for semantic segmentation of 3D models of the human face. These models are represented as mesh objects, which motivates the use of graph-convolutional neural networks to implement the auto-encoder. Since no dataset of 3D face models with annotated facial parts is available, we approach the problem using publicly available 2D annotated data and an analysis-by-synthesis approach. Experimental results validate our approach for 3D face semantic segmentation.
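
To make the idea of a graph-convolutional auto-encoder over a face mesh concrete, the sketch below shows a minimal per-vertex encoder/decoder in plain PyTorch. It is not the authors' implementation: the layer sizes, the number of facial-part classes, the `GraphConv` layer, and the `normalized_adjacency` helper are all illustrative assumptions; only the overall pattern (graph convolutions over mesh connectivity producing a semantic label per vertex) follows the abstract.

```python
# Minimal sketch (assumed, not the paper's code) of a graph-convolutional
# encoder/decoder that assigns a facial-part label (nose, eye, forehead, ...)
# to every vertex of a 3D face mesh.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Simple GCN layer: H' = A_hat @ (H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (num_vertices, in_dim), a_hat: sparse (num_vertices, num_vertices)
        return torch.sparse.mm(a_hat, self.linear(x))

class MeshSegAutoEncoder(nn.Module):
    """Encoder compresses per-vertex features; decoder emits per-vertex class logits."""
    def __init__(self, in_dim=3, hidden=64, latent=32, num_classes=8):
        super().__init__()
        self.enc1 = GraphConv(in_dim, hidden)
        self.enc2 = GraphConv(hidden, latent)
        self.dec1 = GraphConv(latent, hidden)
        self.dec2 = GraphConv(hidden, num_classes)

    def forward(self, x, a_hat):
        h = torch.relu(self.enc1(x, a_hat))
        z = torch.relu(self.enc2(h, a_hat))   # latent per-vertex code
        h = torch.relu(self.dec1(z, a_hat))
        return self.dec2(h, a_hat)            # (num_vertices, num_classes)

def normalized_adjacency(edges, num_vertices):
    """Build A_hat = D^-1/2 (A + I) D^-1/2 from mesh edges given as (i, j) pairs."""
    idx = torch.tensor(edges, dtype=torch.long).t()
    idx = torch.cat([idx, idx.flip(0)], dim=1)                # make edges symmetric
    loops = torch.arange(num_vertices).repeat(2, 1)
    idx = torch.cat([idx, loops], dim=1)                      # add self-loops
    vals = torch.ones(idx.shape[1])
    a = torch.sparse_coo_tensor(idx, vals, (num_vertices, num_vertices)).coalesce()
    deg = torch.sparse.sum(a, dim=1).to_dense()
    d_inv_sqrt = deg.pow(-0.5)
    norm_vals = d_inv_sqrt[a.indices()[0]] * a.values() * d_inv_sqrt[a.indices()[1]]
    return torch.sparse_coo_tensor(a.indices(), norm_vals, a.shape).coalesce()

# Toy usage: a tetrahedron-like mesh with 4 vertices and xyz coordinates as features.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]
x = torch.randn(4, 3)
a_hat = normalized_adjacency(edges, 4)
model = MeshSegAutoEncoder()
logits = model(x, a_hat)          # (4, num_classes) per-vertex logits
labels = logits.argmax(dim=1)     # predicted facial-part label per vertex
```

In practice, per-vertex labels for training such a model would have to come from the 2D annotated data via the analysis-by-synthesis step described in the abstract; the sketch only illustrates the mesh-side network.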

Authors

  • Krasimir Tonchev
  • Radostina Petkova
  • Nikolay Neshov
  • Agata Manolova

Venue

2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)

Links

https://ieeexplore.ieee.org/document/9660844
