The senior design project I have been working on in school is a "smart mirror". Equipped with a thermal and RGB camera, its primary purpose is to analyze a user's face and give health feedback. The subsystem I was responsible for was image processing, and my goal was to analyze user images to obtain health metrics. In this post I go over extracting the user's skin from an image.
If you take a picture of someone and want to do some image analysis of their skin lesions, first you have to determine which part of the image is their skin. A major requirement for this process is that it has to be more flexible than simply providing a fixed range of color values meant to cover all human skin types.
The idea behind the skin extraction process here is that if you know where the face is, then you have an approximation of what the rest of the skin looks like. This works well because face detection is robust, easy to implement, and effective across all skin tones. The process can also be improved by detecting and removing the eyes and mouth before the skin color range is calculated.
This process will still include any background areas or hair that are too similar in color to the user's skin. In addition, the accuracy of skin detection depends on the accuracy of face and facial feature detection. For the duration of this project I used OpenCV's Haar Cascade classifiers for this detection; however, they were not all that accurate, especially for mouth and eye detection. If I were to go back, I would look at alternatives such as facial landmarks with dlib.
GitHub repository with the OpenCV code, in C++.