As discussed in a previous post, the magnitude of the p53 challenge is what p53 researcher Gerard P. Zambetti has described as “truly overwhelming”: trying to stay on top of the clinical literature, getting from gene to pathway, then from pathway to pharmaceuticals, all toward eradicating and possibly preventing cancer.
Well, leave it to the people at IBM Research Labs to give it a computational go.
To let Watson, their supercomputing platform with its brute-force ability to sort data and its reported capacity to read 800 million pages per second, tackle p53, in the hope that the raw power of its analytics might prove predictive, even prescriptive, in defeating cancer.
As relayed in an article in The New York Times, the IBM p53 project was conducted in collaboration with biologists and data scientists at Baylor College of Medicine. With more than 70,000 papers published on p53 (a human reading five articles a day would need nearly 38 years to get through them all), Watson scanned and sorted the content of every article in an effort to uncover new insight into the workings of p53.
And in large part, Watson’s “thought work” worked.
Watson made previously unrecognized connections, enabling researchers to identify six proteins (candidate kinases) as potential targets for additional p53 research. Watson’s role is an example of “automated hypothesis generation based on mining scientific literature,” as summarized in the project’s final write-up, and is emblematic of the burgeoning field of machine-based “deep learning.” (A video lecture on the IBM Watson p53 findings can be found here.)
So while IBM Watson may have helped humanity take a meaningful step toward decoding p53 and further understanding its complex mechanisms, don’t get out the Nobel Prize in Medicine just yet.
The next blog post on p53 will discuss how Kevetrin is one of the few, and among the most promising, p53-modulating drugs currently being tested in human trials.