Beyond Jeopardy! with IBM Watson – Quick Analysis

(Photo: Packed Watson watching at IBM Austin)

Seeing a computer play two humans at Jeopardy! is a lot more entertaining than I thought it’d be. I’d been ignoring most of the hoopla around Watson, figuring it for a big, effective PR campaign on IBM’s part. It is certainly that, and good on them for doing it. I’ve been more interested in what practical, work-place applications the technology behind Watson has, and I got a little bit of that, along with some other interesting tidbits, at a Watson event this week at IBM’s Austin campus.

The Technology Used

In addition to IBM PR and AR reaching out to me, the Apache Software Foundation sent me info on the Hadoop and UIMA software being used by Watson:

The Watson system uses UIMA as its principal infrastructure for component interoperability and makes extensive use of the UIMA-AS scale-out capabilities that can exploit modern, highly parallel hardware architectures. UIMA manages all work flow and communication between processes, which are spread across the cluster. Apache Hadoop manages the task of preprocessing Watson’s enormous information sources by deploying UIMA pipelines as Hadoop mappers, running UIMA analytics.

The ASF press release is actually jammed with a lot of “how it works” info.
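
Just to make that “UIMA pipelines as Hadoop mappers” phrase concrete, here’s a minimal Java sketch of the general pattern (my own illustration, not Watson’s code; the descriptor file name and the one-document-per-record input format are assumptions):

    // Hypothetical sketch only: roughly what "UIMA pipelines as Hadoop mappers"
    // looks like in practice, not Watson's actual code. The descriptor file name
    // and the Text-per-document input format are assumptions.
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.uima.UIMAFramework;
    import org.apache.uima.analysis_engine.AnalysisEngine;
    import org.apache.uima.jcas.JCas;
    import org.apache.uima.resource.ResourceSpecifier;
    import org.apache.uima.util.XMLInputSource;

    import java.io.IOException;

    public class UimaAnalysisMapper extends Mapper<LongWritable, Text, LongWritable, Text> {

      private AnalysisEngine engine;
      private JCas jcas;

      @Override
      protected void setup(Context context) throws IOException {
        try {
          // Build the UIMA analysis engine once per mapper from its XML descriptor.
          ResourceSpecifier spec = UIMAFramework.getXMLParser()
              .parseResourceSpecifier(new XMLInputSource("descriptor.xml"));
          engine = UIMAFramework.produceAnalysisEngine(spec);
          jcas = engine.newJCas();
        } catch (Exception e) {
          throw new IOException("Failed to initialize UIMA pipeline", e);
        }
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        try {
          // Run each input document through the UIMA analytics and emit the
          // processed text for downstream indexing.
          jcas.reset();
          jcas.setDocumentText(value.toString());
          engine.process(jcas);
          context.write(key, new Text(jcas.getDocumentText()));
        } catch (Exception e) {
          throw new IOException("UIMA processing failed", e);
        }
      }

      @Override
      protected void cleanup(Context context) {
        if (engine != null) {
          engine.destroy();
        }
      }
    }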

Additionally, Watson runs on POWER7 machines with Linux, one of IBM’s exotic (but revenue-pulling – $1.35B last quarter by TPM’s estimates) platforms. I was wondering why the team chose POWER, and though I didn’t get a chance to ask, one of the IBMers I was sitting next to said that the cooling ability of POWER machines meant they could pack more of them into the Watson cluster(s).

Here’s a brief hardware description from an overview whitepaper:

Early implementations of Watson ran on a single processor, which required two hours to answer a single question. The DeepQA computation is embarrassingly parallel, however, and so it can be divided into a number of independent parts, each of which can be executed by a separate processor. UIMA-AS, part of Apache UIMA, enables the scale-out of UIMA applications using asynchronous messaging. Watson uses UIMA-AS to scale out across 2,880 POWER7 cores in a cluster of 90 IBM Power® 750 servers. UIMA-AS manages all of the inter-process communication using the open JMS standard. The UIMA-AS deployment on POWER7 enabled Watson to deliver answers in one to six seconds.

Watson harnesses the massive parallel processing performance of its POWER7 processors to execute its thousands of DeepQA tasks simultaneously on individual processor cores. Each of Watson’s 90 clustered IBM Power 750 servers features 32 POWER7 cores running at 3.55 GHz. Running the Linux® operating system, the servers are housed in 10 racks along with associated I/O nodes and communications hubs. The system has a combined total of 16 Terabytes of memory and can operate at over 80 Teraflops (trillions of operations per second).
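
Back-of-envelope, those numbers hang together, if you treat the whitepaper’s “single processor” as roughly one core and assume near-linear scaling (both of those are my assumptions, not IBM’s):

    90 servers × 32 cores/server = 2,880 cores
    2 hours for one question ≈ 7,200 seconds of single-core work
    7,200 core-seconds ÷ 2,880 cores ≈ 2.5 seconds per question

That lands right in the quoted one-to-six second range once you allow for imperfect parallelism and coordination overhead.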

During Q&A an audience member asked if Watson could do better if it took longer to answer questions. In the game, of course, Watson is trying to answer questions as quickly as possible. The answer was yes. And, in fact, Watson already does something like this: it actually runs two processes to answer each question:

  1. The first is a quick process that favors speed instead of accuracy. This fast process is used by Watson to see if it should buzz in at all.
  2. The second is a longer process that favors accuracy and is the process used to actually answer questions.

So, presumably, at the start of each question Watson spins up these two processes, handing the real answering work off to the one that gets a few more seconds.
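
Here’s a rough Java sketch of how that fast-versus-accurate split could be wired up. This is my guess at the general shape, not IBM’s implementation; the answerQuickly/answerThoroughly methods and the buzz threshold are made up:

    // Hypothetical sketch of the two-process answering scheme described above.
    // The placeholder methods and the 0.5 confidence threshold are invented.
    import java.util.concurrent.CompletableFuture;

    public class BuzzDecision {

      record Answer(String text, double confidence) {}

      public static void main(String[] args) {
        String clue = "It runs on 2,880 POWER7 cores and buzzes in politely";

        // Kick off both passes as soon as the clue is available.
        CompletableFuture<Answer> fast = CompletableFuture.supplyAsync(() -> answerQuickly(clue));
        CompletableFuture<Answer> slow = CompletableFuture.supplyAsync(() -> answerThoroughly(clue));

        // The quick, speed-first pass only decides whether to buzz in at all.
        if (fast.join().confidence() > 0.5) {
          // Buzz, then answer with whatever the slower, accuracy-first pass produced.
          System.out.println("What is " + slow.join().text() + "?");
        } else {
          System.out.println("(stay quiet)");
        }
      }

      // Placeholder: a cheap estimate that favors speed over accuracy.
      static Answer answerQuickly(String clue) {
        return new Answer("Watson", 0.8);
      }

      // Placeholder: a slower pass that favors accuracy.
      static Answer answerThoroughly(String clue) {
        return new Answer("Watson", 0.97);
      }
    }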

There’s a six-page whitepaper on the POWER7 (and software) angles of Watson over at IBM; tragically, you have to lead-gen your way into it, but it’s worth the typing if you’re interested.

Open Source

What I find interesting here is the heavy reliance on open source software for this impressive Big Data application. The innovations are interesting on their own, but from a “how do I apply this to my situation?” perspective, the fact that it’s open source, in theory, opens up the possibility of using the underlying technology to a wider set of people, if only because it’s cheaper than proprietary options.

For the IBM Systems & Technology Group (STG, which produces and sells all the hardware IBM has), it’d be gravy: why spend all that money on software when you can spend it on hardware? (To be fair, for some time now, and especially with Software Group [SWG] head-honcho Steve Mills running both STG and SWG, IBM would prefer to collect on both types of -ware.)

It’s part of what John Willis would call “The Cambrian Cloud Computing Explosion.” In my words: there’s an excess of technological innovation at affordable prices (the big difference) out there just waiting for business demand.

Applications beyond Trivia

In addition to the technologies used, the most commonly asked question around Watson has been what other uses it might have. As one of the professors at the Austin event said, what they wanted to do was have a system where “you give a question, and it comes up with a specific answer, not just a [list of documents like Google].” That should remind people of what WolframAlpha is trying to do (in fact, see an in-depth comparison).

Dealing with unstructured text (much of what we humans produce) has always been difficult. Getting “computers” to understand the nuance in human questions has also always been hard – I can barely understand my UK-dialected fellow English speakers at times, so I wonder how a computer gets by. Part of what Watson does is demonstrate advances in both of those areas. The costs for this initial run (and those that have come before it) are high, for sure, but watching that thing zoom through oddly phrased questions on TV is pretty amazing.

The IBM folks sent along some possible applications post-Jeopardy!:

Making better decisions – companies can relate to the problem of data overload. Potential applications for Watson are:

  • Healthcare and Life Sciences – Diagnostic Assistance, Evidence-Based, Collaborative Medicine. More, as quoted by Michael Cooney: “… a doctor considering a patient’s diagnosis could use Watson’s analytics technology, in conjunction with Nuance’s voice and clinical language understanding solutions, to rapidly consider all the related texts, reference materials, prior cases, and latest knowledge in journals and medical literature to gain evidence from many more potential sources than previously possible. This could help medical professionals confidently determine the most likely diagnosis and treatment options.”
  • Tech Support, Help-desk, Contact Centers – Enterprise Knowledge Management (looking stuff up, documenting it) and Business Intelligence – Watson’s analytics ability generates meaningful and actionable insights from data – in real time.

Healthcare is the most frequently cited industry for applications that I’ve come across. As an analyst presentation on Watson said, providers could ask Watson questions like “What illness presents the following symptoms…?” And check out more from Mike Martin on the healthcare angle.

A post from Louis Lazarus over at “Citizen IBM” about using Watson in the non-profit sector adds some more possible uses:

It’s not hard to imagine how the technology could be used to help triage health patients, or field phone calls placed to municipal quality-of-life hotlines, or assist teachers in helping to score complex essays on tests, or help provide information to disaster survivors.

Check out this IBM video for some more possibilities discussion.

Injecting UX into AI

Several people have alluded to the idea that part of what’s special here is the interface – how humans use the technology. Coming up with just one, or a handful, of definitive answers over a massive body of content is no doubt helpful – going to wikipedia when you know a topic is generally faster than simply searching Google (esp. considering all the spam-crap it’s loaded up with on general topics).

In the health-care sector, as one Enterprise Irregular said, doctors often find themselves in wikipedia instead of the better, official references simply because it’s easier to take out your iPhone and look up the topic there. This is one of the under-appreciated aspects of “the consumerization of IT”: realizing that if you make your users’ lives easier (focus on UX and usability), the overall software will be more valuable because (a.) users will use it, and (b.) they’ll be more productive using it. Speed is a feature here (how many times has someone at a call center told you “the computer is being slow, please wait”?), but honing workflows to be helpful is too. And when it comes to helping find the answer instead of a pile of crap from a knowledge base, that’s huge.

Getting your hands on it

The question, as with any whiz-bang technology, is a depressing one: so, how much is that gonna cost me? Hopefully, the open source angle helps drive down the cost, but the hardware needs are still high. Part of the reason to build Watson on POWER7, IBM says, was that the systems are commercially available, as opposed to the custom-built machine used for their previous AI, Deep Blue. Perhaps there’s some help from cheap cloud infrastructure, but I’d wager you’d be sacrificing speed.

It’s fun to watch that polite flat screen beat humans at buzzing in, but it’ll be even more interesting watching the technology be industrialized for the mainstream.

Also, you can check out my quick debriefing recording of the event.

Update: Ideas from John Arley Burns

An old friend of mine, John Arley Burns, suggested some possible uses over on Facebook:

  1. a google labs plugin that returns watson search results alongside normal results, maybe a watson tab
  2. watson was not connected to the internet – connect it to a webcrawler and let it give you answers
  3. watson’s search results, instead of being a list of sites like google, will be a list of hypotheses for the answer, in order of descending confidence, as the reasoning tab on the TED lecture showed (see the sketch after this list)
  4. i was disappointed that it was getting the information electronically instead of via understanding what was being said – hook watson up to a speech processor so it can crawl audio content as well
  5. hook it up to a visual pattern recognizer – IBM already has one of these – and let it crawl images and videos so it can begin to form semantic constructs around them as well
  6. put it on the cloud for long-running questions you could submit in batch jobs, such as, here’s all my research data, i want you to tell me how many nanotubes i should use for this circuit layer
  7. give it long-running backend goals at low priority, as with SETI@home, that serve a socially useful function
  8. allow it to rank importance in recent semantic hypotheses, so that important new items it has with high confidence can be placed on an always-updated news page: what’s watson learning now
  9. feed it news wires so that it can answer time-dependent questions about current and just-now events
  10. connect it to incoming data feeds at all air control towers so that it can reason where probable collisions or bad weather encounters may occur, and automatically warn pilots
  11. connect it to flight schedules, stock prices, pipeline meters, so that it can form a current world view of the instantaneous state of reality
  12. allow it to improve itself by testing program hypotheses, evaluating if they cause its answers to be more or less correct, faster, higher confidence, and then updating to new code if it performs better than previous code (using genetic algorithms, perhaps)
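
Idea 3 in that list is easy to make concrete: instead of a list of links, a query returns a small, confidence-ranked set of hypotheses. A tiny hypothetical Java sketch (the type and the sample answers and confidences are mine, not Watson’s):

    // Hypothetical sketch of "a list of hypotheses in descending confidence"
    // (idea 3 above). The sample answers and confidences are invented.
    import java.util.Comparator;
    import java.util.List;

    public class HypothesisList {

      record Hypothesis(String answer, double confidence) {}

      public static void main(String[] args) {
        List<Hypothesis> hypotheses = List.of(
            new Hypothesis("Chicago", 0.96),
            new Hypothesis("Toronto", 0.14),
            new Hypothesis("Omaha", 0.05));

        // Show candidates ranked by confidence, the way the on-screen
        // answer panel did during the broadcast.
        hypotheses.stream()
            .sorted(Comparator.comparingDouble(Hypothesis::confidence).reversed())
            .forEach(h -> System.out.printf("%s (%.0f%%)%n", h.answer(), h.confidence() * 100));
      }
    }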

Disclosure: IBM is a client, as are the ASF and Cloudera.

Categories: Companies, Enterprise Software, Ideas, Marketing, Open Source, Quick Analysis, The New Thing.

Comments

  1. Since you are collecting the bits that Watson is built on, I'll also point you in the direction of the Juniper network that ties it all together:

    Powering the networking required for Watson are 1 IBM J16E (EX8216) switches populated with 15 10 GbE line cards and 1 GbE line card, as well as 3 IBM J48E (EX4200) switches in a virtual chassis configuration, all running Juniper’s Junos network operating system.

    More here [Juniper blog] http://bloga.tw/frFT2h
    And here [@stu Wikibon] http://bloga.tw/ex3yZ8

  2. Thanks, Abner!

  3. One minor correction – The Watson design point for response was 3 seconds, not 6 seconds. Per IBM's research, competitive players average a 3 second "buzz." Depending on the length of the clue, there's a range of response times (because a player cannot buzz in until after the clue has been completely read, to the last syllable).

  4. Thanks, Don!

  5. There is an interesting TED talk featuring Steven Baker (author of Final Jeopardy!), Kerrie Holley (IBM Fellow looking for Watson’s next job), Dr. Herbert Chase (Columbia University Professor of Clinical Medicine) and Dr. David Ferrucci (IBM Watson Principal Investigator).

    What surprised me most was what Kerrie Holley said. The uses of Watson he suggests don’t have much relation to the core technology. He is talking about analytics, route optimization, etc. – but Watson is about language comprehension.

    http://setandbma.wordpress.com/2011/03/01/how-int

