Computer vision for facial analysis using human–computer interaction models
Facial analysis systems for human–computer interfaces are now presented and employed extensively. Increasing computing speed, improving accuracy, and the availability of low-cost webcams have made computer vision systems more and more popular. These systems can offer people an alternative, hands-free channel of communication with computers through human–computer interfaces. Hence, in this paper, a computer vision-based face analysis model (CVFAM) is suggested for human–computer interaction. The model determines the positions of the mouth and eyes and uses the facial centre to estimate the head's pose. In the suggested model, the face region is extracted from the input image by a cascade classifier combined with a skin detector and is then sent to the recognition phase. In the recognition phase, a threshold condition is examined, and the extracted face and gaze direction are predicted. To support computer systems that automatically analyze images, the CVFAM model is suggested to recognize the face of the user sitting in front of the system, together with the user's hand gestures and facial expressions, thereby providing a user interface for HCI. The simulation results show that the suggested CVFAM model achieves an accuracy ratio of 93.5%, a detection rate of 94.2%, a location analysis ratio of 94.5%, a recognition rate of 83.2%, and an average delay ratio of 22.1%, compared to other existing models.
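The detection stage described in the abstract (a cascade detector confirmed by a skin detector, with the facial centre feeding pose estimation) could be sketched roughly as below. This is a minimal illustration of the skin-filtering and facial-centre steps only; the RGB thresholds and the helper names `skin_mask` and `face_centre` are assumptions for illustration, not the paper's actual parameters or method.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-like pixels using a classic explicit RGB rule.
    The threshold values here are illustrative assumptions."""
    rgb = rgb.astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (rgb.max(-1) - rgb.min(-1) > 15)   # enough colour spread
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )

def face_centre(mask):
    """Centroid (x, y) of the skin region -- a stand-in for the
    facial centre from which the head pose would be estimated."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Tiny synthetic frame: a skin-toned rectangle on a dark background.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:70, 40:80] = (210, 150, 120)   # approximate skin tone
m = skin_mask(frame)
print(face_centre(m))   # centroid of the rectangle
```

In a full system the skin mask would confirm candidate regions returned by a cascade face detector (e.g. a Haar cascade) before the recognition phase, rather than being used alone.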
International Journal of Speech Technology
Liao, Zitian; Samuel, R. Dinesh Jackson; and Krishnamoorthy, Sujatha, "Computer vision for facial analysis using human–computer interaction models" (2022). Kean Publications. 602.