IBM has considerably expanded the functionality of its Watson ‘cognitive computing platform’ and released a set of new APIs to developers.

According to the company, these new APIs will help its customers build advanced text, speech and image recognition capabilities into their software and systems.

“Since introducing the Watson development platform, thousands of people have used these technologies in new and inventive ways, and many have done so without extensive experience as a coder or data scientist,” said Mike Rhodin, senior vice president of the IBM Watson business unit. “We believe that by opening Watson to all, and continuously expanding what it can do, we are democratizing the power of data, and with it innovation.”

In total, more than 100 ecosystem partners have already launched their own Watson-based apps. The company also revealed it is building a new Watson hub in San Francisco, set to open in early 2016.

IBM made all of these announcements during its forum on cognitive computing and artificial intelligence.

Watson, the early years (image: Wikimedia Commons)

Watson emerges

Just 20 months ago, Watson - originally designed to win the TV quiz show Jeopardy! - had a single API that let the system research a question and return an answer with supporting evidence.

Today, the platform provides more than 25 APIs that give easy access to services based on machine learning and natural language processing.

For example, ‘Tone Analyzer’ is an experimental tool that detects the tones present in a piece of text and helps the writer understand and revise them. Another tool, ‘Tradeoff Analytics’, helps users choose between competing options using a mathematical filtering technique called Pareto optimization.
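To make the Tradeoff Analytics idea concrete, here is a minimal sketch of Pareto filtering in Python. It illustrates only the underlying technique, not IBM's implementation; the phone options and objectives below are invented for the example.

```python
# Minimal sketch of Pareto filtering, the technique Tradeoff Analytics is
# built on. Illustrative only; the data and objectives are made up.

def dominates(a, b, objectives):
    """True if option `a` is at least as good as `b` on every objective
    and strictly better on at least one. All objectives are treated as
    'higher is better'; negate 'lower is better' values up front."""
    at_least_as_good = all(a[k] >= b[k] for k in objectives)
    strictly_better = any(a[k] > b[k] for k in objectives)
    return at_least_as_good and strictly_better

def pareto_front(options, objectives):
    """Keep only the options that no other option dominates."""
    return [a for a in options
            if not any(dominates(b, a, objectives) for b in options if b is not a)]

# Example: choosing a phone by price (lower is better, so stored negated)
# and battery life (higher is better).
phones = [
    {"name": "A", "neg_price": -300, "battery_h": 20},
    {"name": "B", "neg_price": -250, "battery_h": 18},
    {"name": "C", "neg_price": -350, "battery_h": 15},  # worse than A on both
]
for p in pareto_front(phones, objectives=["neg_price", "battery_h"]):
    print(p["name"])  # prints A and B; C is dominated by A and filtered out
```

Every option the filter keeps represents a genuine trade-off: improving it on one objective would require giving something up on another.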

‘Personality Insights’ can establish a subject’s personality traits after reading just 3,500 words of their writing, while the newly released ‘Visual Insights’ will “extract” interests, activities and hobbies from publicly available photos and videos.
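Like the other Watson Developer Cloud services, these are exposed as REST endpoints. The sketch below shows roughly what a Personality Insights call looked like under the 2015-era conventions (HTTP Basic auth against a gateway URL); the endpoint, credentials and input file are placeholders, so consult the service documentation for the real values.

```python
# Hedged sketch of calling Personality Insights over REST. The URL is an
# assumption based on 2015-era Watson Developer Cloud conventions, and the
# credentials and file name are placeholders, not real values.
import requests

URL = "https://gateway.watsonplatform.net/personality-insights/api/v2/profile"  # assumed
USERNAME = "service-username"   # placeholder credential
PASSWORD = "service-password"   # placeholder credential

# The service expects a sizeable writing sample (roughly 3,500+ words for
# a reliable profile, per the article); this file stands in for one.
text = open("writing_sample.txt", encoding="utf-8").read()

resp = requests.post(
    URL,
    auth=(USERNAME, PASSWORD),
    headers={"Content-Type": "text/plain", "Accept": "application/json"},
    data=text.encode("utf-8"),
)
resp.raise_for_status()
profile = resp.json()  # a tree of inferred traits with confidence scores
print(profile)
```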

And the first successful products based on these new services have already started to appear. IBM customer Carney Labs has built a platform that helps schools understand a student’s personality characteristics and build them a career roadmap; the platform will now be rolled out across all high schools in Virginia.

Meanwhile, iDAvatars relied on Watson’s language APIs to create a virtual assistant that helps patients with chronic diseases monitor and manage their condition - an approach now being evaluated by the US Veterans Health Administration.

Other case studies presented by IBM include using Watson to automate the recruitment process, train athletes and even recommend wine.

To encourage more interest in the platform, IBM has today released its largest ever set of Watson Developer Cloud services, designed to reduce the time required to integrate its APIs and data sets into mobile and cloud applications. The company has also previewed the IBM Watson Knowledge Studio, which will put the power of the platform into the hands of people with limited IT expertise.

And finally, IBM announced it is going to establish a new Watson Hub in the SoMa neighborhood of San Francisco. This will place it in the heart of the Silicon Valley start-up ecosystem, where it hopes to connect with developers, venture capital groups and academic experts. IBM will continue investing in Watson start-ups from its own $100 million seed fund.

The location will also serve as the new global headquarters for IBM Commerce, which will adapt some of the Watson services directly for the media and marketing industries.

Just a few months ago, I wrote an opinion piece about Watson’s question-and-answer functionality. It appears things have become a lot more terrifying since then.