Cognitive Computing Makes It Possible to Build Truly Amazing Apps

Cognitive computing is a relatively new area of computer science that has been growing rapidly in recent years. While artificial intelligence, a field of study that has been around since the 1950s, is generally well known, cognitive computing has only recently garnered recognition from the general public. This is largely due to IBM Watson's famous appearance on, and win of, the TV trivia game show Jeopardy! in 2011.

The use of cognitive computing technology has grown to the point where, last month, the first cognitive computing forum was held in San Jose, California. The two-day conference featured presentations by industry leaders from companies and institutions such as Carnegie Mellon University, Cognitive Scale, Ersatz Labs, Google, IBM and IBM Research, MapR and Numenta.

In June, Silicon Valley-based startup AppOrchid Inc. unveiled disruptive new technology, described by the company as the "first cognitive computing app builder for the Internet of Everything market." The AppOrchid platform allows enterprises to build applications that leverage cognitive computing technology. Key features of the platform include knowledge graphs and NoSQL document stores, a big data and data science toolbox with cognitive science libraries, pattern matching, and self-learning capabilities.
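
AppOrchid has not published implementation details, but the pairing of a knowledge graph with a NoSQL document store is a common pattern: entities and their relationships are stored as self-describing documents that an application can traverse. A minimal sketch in Python, with hypothetical Internet of Things entities, might look like this:

    # Minimal sketch: a knowledge graph stored as NoSQL-style documents.
    # Entity names and fields are hypothetical, for illustration only.

    documents = {
        "meter:42": {
            "type": "smart_meter",
            "reads_kwh": 1280.5,
            "edges": [("feeds_into", "transformer:7")],
        },
        "transformer:7": {
            "type": "transformer",
            "capacity_kva": 500,
            "edges": [("located_in", "substation:3")],
        },
        "substation:3": {"type": "substation", "edges": []},
    }

    def neighbors(doc_id, relation):
        """Follow edges of a given relation type from one document."""
        return [target for rel, target in documents[doc_id]["edges"] if rel == relation]

    # Traverse from a meter up to its substation.
    transformer = neighbors("meter:42", "feeds_into")[0]
    print(neighbors(transformer, "located_in"))   # ['substation:3']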

What Is Cognitive Computing?

There does not seem to be a set definition of cognitive computing at this time. Deloitte University Press describes cognitive computing as having three main components: machine learning, natural language processing and advancements in the enabling infrastructure. The IBM Research website says, "Cognitive computing systems learn and interact naturally with people to extend what either humans or machine could do on their own. They help human experts make better decisions by penetrating the complexity of big data."

Back in May, IBM Research senior VP and director John E. Kelly III and IBM writer Steve Hamm discussed in depth the "new era of cognitive computing." In that discussion, Kelly offered this definition of cognitive computing:

I like to sort of position cognitive computing as not just a new computing system or computing paradigm but a whole new era of computing. What's changed now, though, is the explosion of data in the world and the rate and pace of change. ... That rate and pace has outstripped our ability to reprogram these systems. And so we are entering, and must enter, I think, a third era of computing, which is as different as the second was from the first. And we've coined this term "cognitive" because it has attributes that are more like human cognition. These are not systems that are programmed; they're systems that learn. These are not systems that require data to be neatly structured in tables or relational databases. They can deal with highly unstructured data, from tweets to signals coming off sensors.

Hamm provides an additional note to that definition, stressing that "cognitive computing is not artificial intelligence. ... Cognitive computing has the modesty of not trying to replicate the human brain."

Cognitive computing and the emerging field of cognitive analytics can be applied to many use cases in a variety of fields, such as healthcare, retail, supply chain management, financial services, risk management, fraud detection and cybersecurity.

Below are a few examples of companies that have built platforms and/or applications that incorporate cognitive computing technology. These companies were chosen to show a sampling of the market and also because they provide APIs.

IBM Watson

Image credit: IBM

IBM Research is arguably the leader in the field of cognitive computing, and IBM Watson is probably the world's best-known example of cognitive computing technology. IBM Watson is described on the company website as technology that is capable of "getting smarter": it learns from prior interactions, understands natural language and generates evidence-based hypotheses. In a recent cognitive computing discussion, senior VP Kelly explained that if you were to look inside Watson, you would see some very low-level programming for machine control; beyond that, Watson consists mainly of two things: massive sets of statistical algorithms and machine learning. He also explained that the company is working on adding computer vision capabilities to Watson. He said:

Architecture inside these systems is built with a structure that not only you can add algorithms, but you can add whole new capabilities. Today Watson understands very complex natural human language, but Watson is blind. It cannot understand the image. It is being trained, and we are writing new algorithms and new programs for image recognition, feature extraction from images. So adding new capabilities or new senses to Watson where these cognitive systems is also a huge area of interest.
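
IBM's published research on the DeepQA architecture behind Watson gives a sense of what those "massive sets of statistical algorithms" do in practice: many independent scorers each rate the evidence for a candidate answer, and learned weights combine those scores into a single confidence. The sketch below is a simplified, hypothetical illustration of that idea in Python; the candidate answers, feature names and weights are invented, and this is not IBM's code.

    # Simplified illustration of evidence-weighted answer ranking, in the spirit of
    # Watson's "evidence-based hypotheses". Feature names and weights are invented.

    # Each candidate answer gets scores from several independent evidence scorers.
    candidates = {
        "Toronto":  {"passage_support": 0.35, "type_match": 0.2, "popularity": 0.9},
        "Chicago":  {"passage_support": 0.80, "type_match": 0.9, "popularity": 0.7},
    }

    # Weights would normally be learned from labeled question/answer pairs.
    weights = {"passage_support": 0.6, "type_match": 0.3, "popularity": 0.1}

    def confidence(scores):
        """Combine evidence scores into a single confidence value."""
        return sum(weights[name] * value for name, value in scores.items())

    best = max(candidates, key=lambda c: confidence(candidates[c]))
    print(best, round(confidence(candidates[best]), 3))   # Chicago 0.82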

Late last month, IBM announced the availability of IBM Watson Discovery Advisor, a cloud service designed to accelerate scientific and industrial research. Johnson & Johnson is using the new cloud service to teach Watson to read and understand scientific papers that detail clinical trial outcomes. The New York Genome Center is using the new Watson service to support the analysis of a clinical study to advance genomic medicine. This type of cognitive computing cloud service can also be used for a variety of applications in many other industries, such as financial services, law enforcement, government, security and risk management. Mike Rhodin, senior VP, IBM Watson Group, said in a press release:

We're entering an extraordinary age of data-driven discovery. Today's announcement is a natural extension of Watson's cognitive computing capability. We're empowering researchers with a powerful tool which will help increase the impact of investments organizations make in R&D, leading to significant breakthroughs.

Last November, ProgrammableWeb reported that IBM had announced the upcoming launch of the IBM Watson Developer Cloud, a cloud-hosted marketplace for Watson-powered applications. The Watson Developer Cloud and the Watson APIs are currently available only to a select group of partner developers; however, the company plans to make them publicly available in the near future. The IBM Watson Developer site provides information about the IBM Watson ecosystem, API documentation, a forum, videos and more.

Bottlenose

Sonar Solo is a very basic, free version of Bottlenose Nerve Center that was launched to promote Bottlenose technology. Bottlenose Nerve Center uses 100% of the Twitter firehose.

Bottlenose is a company that focuses on trend intelligence from streaming data sources. The Bottlenose platform uses the company's patent-pending StreamSense pattern-recognition technology, natural language processing, machine learning heuristics and other advanced technologies to build "a real-time cognitive map of every topic, much like the human brain." Back in March, Bottlenose announced the availability of real-time analysis and trend detection for words spoken on broadcast TV and radio in the United States, the U.K. and Canada. According to the company announcement, this analysis of broadcast TV and radio data happens in real time, "as the data is ingested, not after the fact."
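
Bottlenose has not disclosed how StreamSense works, but the basic shape of real-time trend detection can be illustrated by comparing each topic's mention count in the current time window against its recent baseline as messages are ingested. The following rough sketch is a generic illustration with invented data, not Bottlenose's algorithm:

    # Rough sketch of streaming trend detection: flag a topic when its mention
    # count in the current window jumps well above its recent average.
    # This is a generic illustration, not Bottlenose's StreamSense algorithm.
    from collections import defaultdict, deque

    WINDOWS = 6          # number of past windows kept as the baseline
    history = defaultdict(lambda: deque(maxlen=WINDOWS))

    def detect_trends(window_counts, threshold=3.0):
        """Return topics whose current count exceeds threshold x their baseline mean."""
        trending = []
        for topic, count in window_counts.items():
            past = history[topic]
            baseline = sum(past) / len(past) if past else 0.0
            if baseline and count / baseline >= threshold:
                trending.append(topic)
            past.append(count)
        return trending

    # Simulated per-minute mention counts from a message stream.
    print(detect_trends({"world cup": 40, "earnings": 12}))   # [] (no baseline yet)
    print(detect_trends({"world cup": 300, "earnings": 14}))  # ['world cup']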

In June, ProgrammableWeb reported that Bottlenose had launched Nerve Center 2.0, an enterprise-grade, real-time trend intelligence platform for streaming data. In addition, the company released new private beta Bottlenose APIs, providing programmatic access to the Bottlenose platform. Developers can use the APIs to build third-party applications that include Bottlenose trend intelligence capabilities.

Infantium

StoryToys, one of Infantium's partners, builds interactive book apps for iOS and Android that are educational and fun for kids. Image credit: StoryToys

Infantium, a startup based in Barcelona, Spain, was created with the goal of "building the world's most advanced computational cognitive architecture" to help transform how children are taught and how they learn. The Infantium platform uses neuroscience and cognitive computing to create personalized, optimized learning experiences for each individual child.

Using the Infantium API and SDK, developers can build applications capable of analyzing a child's learning capabilities, style and motivation. The Infantium platform is powered by a "computational system based on neuroscience," which allows cognitive capabilities to be embedded in nearly any type of learning product and application. Several popular media publishers, such as DADA Co., Duckie Deck, StoryToys and DashForward Lab, have already integrated Infantium technology into their learning applications.
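
Infantium has not published the models behind its platform, but the kind of per-child adaptation described above can be illustrated with a simple rule: track a child's recent accuracy and response times on a skill, then raise or lower the difficulty of the next exercise. The sketch below is purely hypothetical, with invented thresholds, and is not the Infantium API:

    # Hypothetical illustration of per-child difficulty adaptation; this is not
    # the Infantium API, just the general idea of adjusting to observed performance.

    def next_difficulty(current_level, recent_results, max_level=5):
        """recent_results: list of (correct: bool, seconds: float) for recent exercises."""
        if not recent_results:
            return current_level
        accuracy = sum(1 for correct, _ in recent_results if correct) / len(recent_results)
        avg_time = sum(seconds for _, seconds in recent_results) / len(recent_results)

        if accuracy >= 0.8 and avg_time < 10:      # fast and accurate: step up
            return min(current_level + 1, max_level)
        if accuracy < 0.5:                          # struggling: step down
            return max(current_level - 1, 1)
        return current_level                        # otherwise hold steady

    print(next_difficulty(2, [(True, 6.0), (True, 7.5), (True, 5.2), (True, 9.0)]))  # 3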

Numenta

Numenta Grok for managing AWS cost anomalies. Image credit: Numenta

Numenta uses machine intelligence, machine learning, cognitive computing and other advanced technologies to provide enterprise-level pattern recognition and anomaly-detection software. The company describes the core technology powering the Numenta Grok platform as "Hierarchical Temporal Memory (HTM), which is a detailed computational theory of the neocortex. At the core of HTM are time-based learning algorithms that store and recall spatial and temporal patterns." The Numenta Grok platform is capable of detecting anomalies in servers and applications; identifying rogue behavior, such as unauthorized file access or trading; detecting anomalies in the movement of people, objects or material using speed and location data; and much more.

Numenta recently released an open source version of the platform, called the Numenta Platform for Intelligent Computing (NuPIC). NuPIC is an open source project written in Python and C++ that implements Numenta's Cortical Learning Algorithm, which has three principal properties: sparse distributed representations, temporal inference and online learning. The NuPIC API allows developers to work with the raw algorithms, string together multiple regions (including hierarchies) and utilize other platform functions.
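
The HTM terminology is easier to picture with a toy example. A sparse distributed representation (SDR) encodes an input as a large binary vector with only a small fraction of bits active, overlap between SDRs measures similarity, and temporal inference predicts which representation is likely to come next. The snippet below is a toy illustration of those two ideas in Python, not NuPIC's actual implementation:

    # Toy illustration of two HTM ideas: sparse distributed representations (SDRs)
    # and simple first-order temporal prediction. Not NuPIC's implementation.
    import random

    N_BITS, ACTIVE = 2048, 40      # large binary vector, ~2% of bits active

    def random_sdr(seed):
        """Encode a symbol as a fixed sparse set of active bit indices."""
        rng = random.Random(seed)
        return frozenset(rng.sample(range(N_BITS), ACTIVE))

    def overlap(a, b):
        """Similarity between two SDRs = number of shared active bits."""
        return len(a & b)

    # Learn first-order transitions online from a repeating sequence A -> B -> C.
    sdrs = {name: random_sdr(name) for name in "ABC"}
    transitions = {}
    sequence = list("ABCABCABC")
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[sdrs[prev]] = sdrs[nxt]       # remember what followed what

    # Predict what follows "A": the stored SDR overlaps perfectly with B's.
    predicted = transitions[sdrs["A"]]
    print(overlap(predicted, sdrs["B"]), overlap(predicted, sdrs["C"]))  # 40, near 0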

Conclusion

Nearly all areas of computer science have been experiencing rapid growth in recent years, and machine learning in particular seems to be one of the fastest-growing subfields. ProgrammableWeb has published several recent articles covering areas of machine learning, including predictive analytics, graph technology and computer vision. Machine learning, APIs and other advanced technologies make it possible for companies to create genuinely innovative applications. It will be exciting to see what kinds of applications these companies build in the near future.

Janet Wagner is a freelance technical writer and contributor to ProgrammableWeb covering breaking news, in-depth analysis, and product reviews. She specializes in creating well-researched, in-depth content about APIs, machine learning, deep learning, computer vision, analytics, GIS/maps, and other advanced technologies.
