Google Content Safety API Takes AI Approach to Battling Child Sexual Abuse Material

Google has announced an Artificial Intelligence (AI) approach to fighting child sexual abuse material (CSAM). The company's latest effort to combat CSAM online arrives through the Content Safety API, which is available for free to approved NGOs and industry partners working specifically to stop the spread of CSAM on the internet.

"[W]e’re introducing the next step in this fight: cutting-edge artificial intelligence (AI) that significantly advances our existing technologies to dramatically improve how service providers, NGOs, and other technology companies review this content at scale," Google Engineering Lead, Nikola Todorovic, commented in a blog post announcement. "By using deep neural networks for image processing, we can now assist reviewers sorting through many images by prioritizing the most likely CSAM content for review."

Google reports that the AI driving the Content Safety API helps reviewers find and act on up to 700% more CSAM content than before. This should help contain the spread of such content while limiting reviewers' exposure to it. While Google has worked to combat CSAM since its early days, the Content Safety API marks a major leap forward in that effort.
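Google has not published the API's interface in the announcement, but the underlying idea of classifier-driven triage can be sketched in general terms. The snippet below is a hypothetical illustration only: the `ReviewItem` class, `triage` function, and `fake_classifier` are invented for this example and are not part of Google's Content Safety API. A model assigns each queued item a score, and the review queue is sorted so the highest-priority items reach human reviewers first.

```python
# Hypothetical sketch of classifier-driven review triage.
# Not Google's actual API; names and structures are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReviewItem:
    item_id: str
    score: float = 0.0  # model-estimated priority, filled in during triage


def triage(items: List[ReviewItem],
           classifier: Callable[[str], float]) -> List[ReviewItem]:
    """Score each item with the classifier and return the queue sorted
    so the most likely matches are reviewed by humans first."""
    for item in items:
        item.score = classifier(item.item_id)
    return sorted(items, key=lambda i: i.score, reverse=True)


if __name__ == "__main__":
    # Placeholder classifier for illustration; a real system would call
    # an image-classification service here.
    def fake_classifier(item_id: str) -> float:
        return (hash(item_id) % 100) / 100.0

    queue = [ReviewItem("img-001"), ReviewItem("img-002"), ReviewItem("img-003")]
    for item in triage(queue, fake_classifier):
        print(item.item_id, round(item.score, 2))
```

The point of this kind of prioritization is not automation of the final decision but ordering: human reviewers still confirm each item, and the classifier simply ensures their time goes to the content most likely to require action.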

Google and other internet giants have recently come under pressure from governments that feel such companies aren't doing enough to combat CSAM. The Content Safety API offers a scalable way to address the issue, and partners such as the Internet Watch Foundation and the WePROTECT Global Alliance have welcomed the progress. Interested NGOs and industry partners can request access from Google.
