The KiteAI API lets you automatically flag comments as abusive when the model is at least 85% certain, leaving the rest to human moderation along with a detailed breakdown of which user profiles the comment may offend. By reporting the sentiment of a phrase, it can help you judge whether a comment constitutes abuse or harassment, gauge the overall mood of a conversation, or track conversation flow between messages. The service includes abuse detection, topic modeling, confidence intervals, sentiment analysis, and more. KiteAI gives developers a machine-learning-based way to address online harassment.
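The 85% threshold described above amounts to a simple triage rule: auto-flag when the model is confident, otherwise route to a human. The sketch below illustrates that rule; the field names (`abuse_confidence`, `sentiment`) are assumptions for illustration, not KiteAI's documented response schema.

```python
# Hypothetical triage of a KiteAI-style abuse analysis result.
# Field names below are illustrative assumptions, not KiteAI's actual schema.

AUTO_FLAG_THRESHOLD = 0.85  # mark as abusive only when >= 85% certain


def triage_comment(analysis: dict) -> str:
    """Route a comment based on the model's abuse confidence score."""
    confidence = analysis.get("abuse_confidence", 0.0)
    if confidence >= AUTO_FLAG_THRESHOLD:
        return "auto_flag"        # confident enough to mark as abusive
    return "human_moderation"     # leave borderline cases to a human


# Example: a borderline comment is routed to human review.
print(triage_comment({"abuse_confidence": 0.62, "sentiment": "negative"}))
```

Keeping the threshold in one named constant makes the moderation policy easy to tune without touching the routing logic.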