Developers Can Talk to AT&T's Watson API in June

Curtis Chen
Apr. 23, 2012, 4:00 PM EDT

Last Thursday, telecom giant AT&T announced that it will open several of its "Watson" speech recognition APIs to outside developers in June. It may be some time before any killer apps hit the scene, but a major player like AT&T getting into the speech-to-text API game should not be ignored.

In a press release, AT&T senior VP John Donovan described the seven different Watson APIs coming in June:

"web search, local business search, question and answer, voice mail to text, SMS, U-verse electronic programming guide [for televisions], and a [general purpose] dictation API."  A YouTube video clarifies why there will be different Watson APIs:  each one is trained to recognize the specialized vocabulary associated with a single category, so "the software will know what types of words to expect."

What's the catch? That's hard to say. This could simply be a way to get more users interacting with the system in new and different ways. Donovan explains:

"the best way to accelerate innovation is by opening our platforms and network services to outside developers."

Watson itself isn't a new technology--AT&T has been using it internally for decades--but giving it to third-party developers is a big strategy shift.
Back in 2008, Discover Magazine profiled AT&T's nascent plans to build speech recognition into more of its offerings, starting with a demonstration of "a voice-operated television remote control...[d]esigned to work with AT&T’s Internet TV service, U-verse." (That didn't work out, but TV guide information is one of the upcoming Watson APIs.) Last week, AT&T unveiled a new Watson demo: the AT&T Translator, a free iPhone app that "automatically recognizes which language is being spoken and translates in real time" between English, Spanish, Japanese, Chinese, French, German and Italian. (The iTunes Store reviews aren't very kind so far, but perhaps updates will fix the version 1.0 bugs.)

Can outside developers do better than AT&T's brain trust when it comes to applying speech recognition technology? We'll have to wait and see. ProgrammableWeb readers already know how useful mash-ups can be, and Watson has at least 20 years of AT&T research behind it. Depending on how much of that mighty engine becomes available through the Watson APIs, this could really get people talking.

(Hat tip: TechCrunch)


Curtis Chen: Once a software engineer in Silicon Valley; now a science fiction writer and puzzle hunt maker near Portland, Oregon. You may have seen his "Cat Feeding Robot" Ignite presentation. Curtis is not an aardvark.
