The KiteAI API lets you automatically mark a comment as abusive when the model is at least 85% confident, and leave the rest to human moderation, along with a detailed breakdown of which user profiles the comment may offend. Its sentiment analysis helps you judge whether a comment constitutes abuse or harassment, gauge the overall mood of a conversation, and track how the tone shifts between messages. The service includes abuse detection, topic modeling, confidence intervals, sentiment analysis, and more. KiteAI gives developers a machine-learning-based way to tackle online harassment.
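The 85% confidence threshold described above could be applied in client code roughly as follows. This is a minimal sketch only: the response shape (`label` and `confidence` fields) and the function name are assumptions for illustration, not the documented KiteAI API.

```python
# Hypothetical sketch of the routing rule: auto-flag a comment when the
# abuse classifier is >= 85% confident, otherwise hand it to a moderator.
# The analysis dict's "label"/"confidence" keys are assumed, not official.

def route_comment(analysis: dict, threshold: float = 0.85) -> str:
    """Return 'auto_flag' or 'human_moderation' for an abuse-detection result."""
    if analysis.get("label") == "abusive" and analysis.get("confidence", 0.0) >= threshold:
        return "auto_flag"        # >= 85% certain: mark as abusive automatically
    return "human_moderation"     # uncertain cases stay with human moderators

print(route_comment({"label": "abusive", "confidence": 0.92}))  # auto_flag
print(route_comment({"label": "abusive", "confidence": 0.60}))  # human_moderation
```

In practice the `analysis` dict would be parsed from the API's JSON response; keeping the threshold as a parameter lets teams tune how much they defer to human review.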