Can Facebook's New Augmented Reality Tools Spot a Murder On Live Video?

On day one of F8 - Facebook's annual developer conference - as the press waited in line for a security pat-down before entering Mark Zuckerberg's keynote presentation, the single most burning question was whether he would make any reference to a recent murder whose killer broadcast the crime in real time over Facebook Live. After all, F8 is a developer event. It hardly seemed relevant. Or was it? When the queue started, most of us thought the killer was still at large. By the time we gained entry, though, the news that he was dead had already raced through the line.

Inside, as Zuckerberg took to the stage and began his keynote, he did not let the opportunity to address the tragedy pass. He noted the "tragic events that took place this week in Cleveland" and resolved: "We have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening."

Given the events in Cleveland, Zuckerberg and Facebook are, of course, under tremendous pressure to respond. But how?  

Institutionally speaking, there's nothing Facebook can do to stop a murderer from murdering. Or a suicide from happening. Or any other senseless act. But the question I think Zuckerberg was looking to address is whether Facebook can do something to prevent the live broadcast of such a brutal event over its platform, or to keep videos of such events from remaining available for hours afterward, as was the case in this week's incident. At least one outraged friend of mine on Facebook noted how quick the company is to pull down content that infringes a copyright. I don't know what the average response time to such violations is, and whether it takes more or less time doesn't really matter. It's the perception that counts.

Tragic as the event in Cleveland was, its timing relative to Facebook's developer conference was eerily coincidental. What followed in Zuckerberg's keynote was a series of announcements amounting to a platform that, using standard smartphone camera technology, can not only convert a photo or video from two dimensions to three (3D), but can also recognize objects like walls, floors, countertops, refrigerators, glasses, and bowls in real time with astonishing accuracy. Just as important to developers, these and other attributes are available to third-party software in machine-readable form. If the platform spots a dog, a wine bottle, or a gun, software developed by Facebook or any third-party developer can do something about it in code. But again, how?
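To make that concrete, here is a minimal, purely hypothetical sketch of what "doing something about it in code" could look like. Facebook did not publish an API of this shape during the keynote; the callback name, label strings, and confidence threshold below are my own assumptions for illustration, not the platform's actual interface.

```python
# Hypothetical sketch only: the function names, labels, and callback shape
# below are illustrative assumptions, not a documented Facebook API.

WATCHLIST = {"gun", "knife"}     # labels a developer might treat as sensitive
CONFIDENCE_THRESHOLD = 0.90      # only act on high-confidence detections

def on_objects_detected(frame_id, detections):
    """Callback a third-party app might register with an object-recognition
    platform. Each detection is assumed to be a (label, confidence) pair."""
    flagged = [
        (label, confidence)
        for label, confidence in detections
        if label in WATCHLIST and confidence >= CONFIDENCE_THRESHOLD
    ]
    if flagged:
        # What happens next is a product decision: pause the broadcast,
        # queue the clip for human review, or simply log the event.
        escalate_for_review(frame_id, flagged)

def escalate_for_review(frame_id, flagged):
    # Placeholder: in a real system this would notify a human moderation queue.
    print(f"Frame {frame_id}: flagged {flagged} for human review")

# Example: simulated per-frame detections from a live video stream.
on_objects_detected(42, [("wine bottle", 0.97), ("gun", 0.93)])
```

Even in this toy form, the hard part is obvious: the code can only act on what the recognizer reports, and the recognizer has no idea what the gun means.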

For example, can such software be programmed to recognize the difference between a toy gun and a real one? Or between the discharge of a firearm for sport at a shooting range and one fired in a violent crime? How can software differentiate between a brutal crime captured on live video by an innocent bystander (a citizen reporter, in other words) and one captured by the assailant? Or between soldiers in combat and reporters embedded with an infantry unit, versus terrorists looking to terrorize?
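Those questions hint at why a single visual label can't carry the decision on its own. What follows is a minimal sketch, with entirely invented signal names and weights, of how ambiguous detections might be combined with context into a priority score that routes a stream to human moderators rather than triggering an automatic takedown.

```python
# Hypothetical sketch only: these signals and weights are invented for
# illustration; no single visual label can settle the questions above.

from dataclasses import dataclass

@dataclass
class StreamSignals:
    gun_confidence: float              # visual object-recognition score
    gunshot_audio: bool                # audio classifier output (assumed available)
    broadcaster_is_verified_press: bool
    viewer_reports: int                # how many viewers flagged the stream

def review_priority(s: StreamSignals) -> float:
    """Combine weak, individually ambiguous signals into a priority score
    for routing a live stream to human review -- not into a takedown."""
    score = 0.4 * s.gun_confidence
    score += 0.3 if s.gunshot_audio else 0.0
    score += min(s.viewer_reports, 10) * 0.03
    if s.broadcaster_is_verified_press:
        score *= 0.5   # context matters: embedded reporters are not assailants
    return score

# A toy-gun video and a violent crime can produce the same visual label;
# the difference has to come from context like the extra signals above.
print(review_priority(StreamSignals(0.95, True, False, 7)))
```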

The news that Facebook has at its disposal the power of an artificially intelligent platform that could potentially recognize such an event as it takes place could not have come at a better time. Even better, the company is exposing that platform with tooling that makes it possible for talented developers who aren't even employed by Facebook to attempt their own solutions. The more brainpower, the better. But, short of shutting down Facebook's live broadcast capability, the extent of the challenge should not be underestimated. Unfortunately, this is not a problem that's easily solved with code. But if any company has the resources to figure it out, Facebook does.

David Berlind is the editor-in-chief of ProgrammableWeb.com. You can reach him at david.berlind@programmableweb.com. Connect to David on Twitter at @dberlind or on LinkedIn, put him in a Google+ circle, or friend him on Facebook.
