Should Freedom of Speech Be Entrusted to Artificial Intelligence?

In all my life, I cannot recall a time when the norms and laws regarding our constitutional rights to freedom of speech and of the press have been so tested. In my opinion, it's quite normal for the boundaries of these freedoms to be constantly tested and for the government to push back. But the new normal seems egregiously chilling to me. What, if anything, does this have to do with APIs? Read on.

The debacle in Charlottesville and the ensuing conversation have established a frightening grey area between the freedom to speak and the freedom to commit violence (not that there is such a thing). In Illinois, the state Senate approved a resolution asking police to recognize neo-Nazi groups as terrorist organizations. I have no idea what the implications are, but I'm guessing it defines neo-Nazis, like other terrorist organizations, as a special class to whom the typical constitutional rights are not extended.

As much as I agree with the intent, the question of constitutionality is certainly raised. If neo-Nazis are assigned to a special class, then what prevents the administration from doing the same to the newly defined "alt-left"? Well, you don't have to look far for the answer. Echoing the round-'em-up days of McCarthyism, another of this week's shockers has to do with how the Trump administration is demanding data on over a million visitors to an anti-Trump site. Never mind the freedom-of-speech laws that clearly permit such a site to exist. This latest assault appears to equate something as benign as a mouse click with dissent. If thou shalt not click for fear of reprisal, thou shalt certainly not open your mouth. It's the chilling effect personified.

We are surrounded by slippery slopes that are testing our sensibilities as humans and as Americans.

I was reminded of this today as I read an article by Alex Johnson over at Search Engine Journal about Google's artificial intelligence-powered Perspective API. The API, which comes across to me as a very narrow slice of sentiment analysis (for which there are over 100 APIs), ranks the toxicity of a phrase. The API's home page (which is surprisingly devoid of any Google branding) very prominently and rhetorically asks "What if technology could help improve conversations online?" In describing the API's functionality, the page goes on to say:

Perspective is an API that makes it easier to host better conversations. The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give realtime feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information... We'll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as "toxic" to a discussion.

It's not clear to me what makes for an "improved" or "better" conversation, but I worry about the quality of discourse after a machine plays a role in weeding out toxicity.
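For developers who want to see what that scoring looks like in practice, here is a minimal sketch of submitting a comment to the API's comments:analyze endpoint with the TOXICITY attribute requested, based on the API's public documentation. The API key placeholder and the sample comment are, of course, hypothetical.

```python
import requests

# Hypothetical placeholder; a real key comes from the Google API Console.
API_KEY = "YOUR_API_KEY"
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=" + API_KEY
)

def score_toxicity(text):
    """Ask the Perspective API to score a single comment for toxicity."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=payload)
    response.raise_for_status()
    data = response.json()
    # summaryScore.value is a probability-like number between 0 and 1.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(score_toxicity("You are a wonderful conversation partner."))
```

The number that comes back is just that: a number. Everything that matters happens in what a publisher decides to do with it.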

Going back to the article, Johnson wrote "Perspective is built on top of Google’s AI, so it is reasonable to suggest that this system will continue to grow and learn, but it sets a worrying precedent." Because Johnson works for Search Engine Journal, a website that concerns itself with the science of improving the discoverability of your content through various search engine optimization techniques, he is essentially wondering what it means for websites if Google uses the Perspective API to make decisions about how it ranks content within its search engine.

Johnson goes on to ask "Could we be on the precipice of a world in which journalists have to scale back the truths in their work to satisfy Google?" For that matter, could we be on the precipice of a world in which anybody -- you, me, Grandma, or whoever -- has to scale back the truths in their posts to satisfy Google or some other party with the power to throttle the public conversation on the basis of an algorithm (any algorithm, not just Google's)? Again, the slope is slippery. Based on tests that he conducted, his concern isn't entirely unfounded.

Johnson threw three nearly identical phrases at the API, changing only the subject of each phrase: a person's name. The result was disturbing.
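Johnson's exact test phrases aren't reproduced here, but the experiment is easy to sketch: hold the sentence constant, swap only the name, and compare the scores the API returns. The template, names, and key below are hypothetical stand-ins, not Johnson's actual inputs.

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

# Identical sentence template; only the name changes.
TEMPLATE = "I really can't stand {name}."
NAMES = ["Marty", "Alex", "Jordan"]  # hypothetical subjects

for name in NAMES:
    phrase = TEMPLATE.format(name=name)
    payload = {
        "comment": {"text": phrase},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    score = requests.post(ENDPOINT, json=payload).json()[
        "attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    # If the model were consistent, these scores would be identical.
    print(f"{phrase!r} -> toxicity {score:.2f}")
```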

First, if these test results are being accurately reported, I feel bad for all the Martys in the world. More importantly, as can be seen, the outcome is badly flawed, and it raises the question of how a technology that's marketed as a means for detecting toxicity could be successfully used in any decision-making process (fully or partially automated). We have to keep in mind that such an API can be positioned anywhere in any kind of "censorship workflow," and the bigger question has to do with what exactly is done with the toxicity rating after it has been calculated. Does a machine automate the final outcome? Are humans involved, and if so, who are they and where do their tolerances lie on the spectrum?
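To make that question concrete, here is one hypothetical shape such a workflow could take: a hard threshold above which content is removed automatically, a gray zone routed to a human moderator, and everything else published untouched. The thresholds and the routing are assumptions for illustration only, not anything Google or any particular publisher has specified.

```python
# A hypothetical moderation workflow built around a toxicity score.
# The thresholds are arbitrary and exist only to illustrate how much
# of the final outcome hinges on choices made outside the model itself.
AUTO_REMOVE_THRESHOLD = 0.9   # machine decides alone
HUMAN_REVIEW_THRESHOLD = 0.6  # a person gets the final say

def route_comment(toxicity_score):
    if toxicity_score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"       # no human in the loop
    if toxicity_score >= HUMAN_REVIEW_THRESHOLD:
        return "queued for human moderator"  # subjectivity enters here
    return "published"

print(route_comment(0.72))  # -> queued for human moderator
```

Move either threshold, or take the human out of the middle tier, and the same flawed score produces a very different outcome.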

The events of this week serve as another example of the fine line in such decision making (be it by machine or human). Shortly after Heather Heyer was murdered during the Charlottesville counter-protests, domain registrar GoDaddy "booted the neo-Nazi Daily Stormer website for inciting violence." The move came soon after the site published a scathing and repulsive personal attack against Heyer that went viral across social media. In an interview with CNBC, GoDaddy CEO Blake Irving said "We always have to ride the fence on making sure we are protecting a free and open internet. And regardless of whether speech is hateful, bigoted, racist, ignorant, tasteless, in many cases we will still keep that content up because we don't want to be a censor and First Amendment rights matter not just in speech but on the internet as well." Irving went on to say "But when the line gets crossed, and that speech starts to incite violence, then we have a responsibility to take that down... It's a very fine line between making sure we're not being a censor and making sure we're acting in a responsible manner."

The interesting aspect for the purposes of this discussion isn't whether GoDaddy made the right or wrong decision. It's the subjectivity that entered the process and the degree to which the personal values of one organization might clearly differ from those of another, which in turn, when paired with something like a flawed toxicity rating, could yield an entirely different outcome from one website to the next. Irving himself illustrates how fine the line is when he characterizes the process as involving a fence that GoDaddy could fall on either side of.

While the GoDaddy decision did not involve a machine-generated toxicity rating, the episode is still relevant to decisions that do (or will). For example, even if a human is ultimately involved in any censorship decision, the outcome of Johnson's simple test reveals that the problem cuts both ways. Regardless of where the human decision maker lands on the subjectivity spectrum (are they more or less sensitive to the many shades of bigotry?), does the final outcome get its fair day in court when the underlying technology is potentially flawed? And then, of course, there's the situation where the decision is left solely up to the machine.

Finally, as I said earlier, a toxicity rating is really just a much narrower version of the wider field of AI-powered sentiment analysis, for which there are many APIs. Although Google's Perspective API is being singled out for the purposes of this analysis, there are many competing APIs (all of which will likely yield different results if applied to this narrow use case). So, let's not wrongly zero in on Google as though there are no other offerings. Furthermore, sentiment analysis APIs can be tuned to just about any sentiment or sensitivity. For example, depending on who's in charge, one could be made more sensitive to the supposed alt-left than to the alt-right.

So, if you're concerned about one of those science fiction-like doomsday scenarios in which mankind will one day be in an all-out war with the artificial intelligence-powered machines it built, you might want to consider the near-term threat that AI poses to our democracy right now.

David Berlind is the editor-in-chief of ProgrammableWeb.com. You can reach him at david.berlind@programmableweb.com. Connect to David on Twitter at @dberlind or on LinkedIn, put him in a Google+ circle, or friend him on Facebook.
 

Comments (2)

Vincent-Lowe

...I'm sorry, but the "test" run against the algorithm is comically superficial and fully insufficient to generate any meaningful conclusion. (And by the way, your conclusion could be said to be flawed. It's not Marty who is at risk here, it's Alex, against whom hate appears to be the least objectionable.)

The sample size in this "test" is far too small, and the dystopian fear that this commentary seeds is probably unfounded.

Frankly, human moderators probably generate more dysfunctional censorship than a misguided AI algorithm might do. Particularly on sites like Stack Overflow where armies of meta-content nazis roam the conversation threads just looking for an innocent question or comment to squash.

Perhaps we should give this some time to mature and observe the outcome instead of pronouncing it unsuitable at its inception. I'm sure Ned Ludd would disagree.

 ---v