The Bodyguard Bamboo API returns data about moderated content via GraphQL queries, using fields such as severity (high, medium, etc.), type (hateful, supportive), recommendedAction (keep, remove), a list of classifications (hatred, insult, supportive, etc.), and more. Bodyguard provides online moderation services to encourage positive engagement and protect online businesses from toxic content. The service detects emojis, typos, and misspelled words; analyzes potentially toxic content; and returns a result based on moderation rules covering content severity, type, message recipient, and more.
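As a rough illustration of how a client might consume such a response, here is a minimal Python sketch. The exact GraphQL schema and response nesting are assumptions (the `analyze` wrapper is hypothetical); only the field names severity, type, recommendedAction, and classifications come from the API description above.

```python
import json

# Hypothetical sample of a Bodyguard Bamboo GraphQL response.
# The "analyze" wrapper and overall nesting are assumed; the field
# names match those described in the article.
sample_response = json.dumps({
    "data": {
        "analyze": {
            "severity": "high",
            "type": "hateful",
            "recommendedAction": "remove",
            "classifications": ["hatred", "insult"],
        }
    }
})

def should_remove(raw: str) -> bool:
    """Return True when the moderation result recommends removal."""
    result = json.loads(raw)["data"]["analyze"]
    return result["recommendedAction"] == "remove"

print(should_remove(sample_response))  # True
```

A real integration would issue the GraphQL query over HTTPS and branch on recommendedAction to keep or remove the content.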
Five APIs have been added to the ProgrammableWeb directory in categories including Monitoring, Sustainability, and Languages. Featured are an API that detects and analyzes negative content, and an API that returns data about languages based on a user's locale. Here's a look at what's new.