Mitigating Health Risks and Ensuring Safe Video Streaming Environments through Automated Video Content Moderation

Year : 2023

Publisher : Institute of Electrical and Electronics Engineers Inc.

Source Title : iQ-CCHESS 2023 - 2023 IEEE International Conference on Quantum Technologies, Communications, Computing, Hardware and Embedded Systems Security

Document Type :

Abstract

Automated content moderation systems have become essential for maintaining a safe and healthy online environment, given the ever-increasing volume of video content on the internet and social media platforms. Viewers may encounter videos containing flashing lights that can trigger seizures, nudity, or content that evokes extreme emotions such as anxiety, frustration, or despair. In this paper, we propose a comprehensive content moderation solution with three features: epileptic seizure recognition, emotional dysregulation prevention, and a kid-safe mode. Epilepsy is a chronic neurological disorder affecting millions of people worldwide, and seizures can be triggered by various factors, including the flashing lights prevalent in video content. The system analyzes videos for trigger segments and skips past them to non-trigger segments, mitigating the risk of seizures in individuals with epilepsy. The proposed algorithm analyzes frame-to-frame luminosity differences and flags regions with a high density of large luminosity changes as potential seizure-inducing segments. Similarly, when a video is likely to trigger strong emotions, the emotional dysregulation prevention mechanism recognizes this and alerts the user with an emotional summary of the video; the underlying algorithm uses a deep learning model for facial emotion identification based on the VGG16 architecture. In kid-safe mode, a CNN-based nudity detection algorithm recognizes explicit content and blocks users, especially children, from viewing it. The system also displays notifications and alerts so that parents can set restrictions on the content their children access. Together, the proposed mechanisms prevent potential harm to users with specific vulnerabilities and give parents tools to ensure a safe online streaming environment for their kids.
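The abstract describes flagging segments with a high density of large frame-to-frame luminosity changes. The paper does not publish its exact thresholds or windowing, so the sketch below is only a minimal illustration of that idea under assumed parameters: `diff_thresh`, `pixel_frac`, `flashes_per_sec`, and the Rec. 601 luma weights are all hypothetical choices, not values from the paper.

```python
import numpy as np

def luminance(frame_rgb):
    # Rec. 601 luma approximation for an (H, W, 3) RGB frame.
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def flag_flash_segments(frames, fps, diff_thresh=20.0,
                        pixel_frac=0.25, flashes_per_sec=3):
    """Return (start, end) frame-transition indices of segments whose
    density of large luminance changes exceeds flashes_per_sec.
    All thresholds are illustrative assumptions, not the paper's values."""
    lums = np.stack([luminance(f.astype(np.float32)) for f in frames])
    # Fraction of pixels whose luminance jumps by more than diff_thresh
    # between consecutive frames.
    big_change = (np.abs(np.diff(lums, axis=0)) > diff_thresh).mean(axis=(1, 2))
    flashy = big_change > pixel_frac  # one bool per frame transition
    # Count flashy transitions in a sliding one-second window.
    window = max(int(fps), 1)
    counts = np.convolve(flashy.astype(int), np.ones(window, dtype=int),
                         mode="same")
    risky = counts >= flashes_per_sec
    # Collapse the boolean mask into contiguous (start, end) segments.
    segments, start = [], None
    for i, r in enumerate(risky):
        if r and start is None:
            start = i
        elif not r and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(risky)))
    return segments
```

A player built on this could jump playback from the start of each flagged segment to its end, which is the "skip to non-trigger segments" behavior the abstract describes.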