Big Data Does Not Equal Big Knowledge

[Figure: the Raf-Mek-Erk signaling pathway]

… the life science Big Data scene is largely Big Hype. This is not because the data itself is not valuable, but rather because its real value is almost invariably buried under mountains of well-meaning but fruitless data analytics and data visualization. The fancy data dashboards that big pharmaceutical companies spend big bucks on for handling their big data are, for the most part, little more than eye candy whose colorful renderings convey an illusion of progress without the reality of it.

Read the full article on LinkedIn.

© The Digital Biologist

Theranos: A unicorn with real potential or a horse in a costume?

Theranos, the troubled healthcare startup (it feels faintly ridiculous to use that term for a company valued at $9 billion), is now at something of a crossroads with the regulatory agencies upon whose approval its entire business model ultimately depends. Following months of apparent obfuscation and stonewalling about how (and how well) its disruptive blood-testing technology works, a scathing Wall Street Journal article exposed a degree of potential fraud and deceit surrounding this much-lauded technology that nobody outside Theranos could have imagined (especially since most of the lauding was coming from the company’s own PR machine).

Despite the clamor for Theranos to release for peer review some of the findings and data it has generated in the course of developing its blood-testing technology, which it claims can replace the use of hypodermic needles with a simple finger prick, the technology is still largely a black box to outsiders. Investors and industry observers want to know whether the technology’s potential squares with the company’s stratospheric valuation, but more importantly, regulatory agencies, healthcare providers and their patients need to know whether the medical diagnostic tests that use this technology actually work and are accurate and reliable.

There’s a great deal more than money at stake here.

This week’s regulatory call for Theranos could make or break the company, depending upon which way it goes. Failure to be in regulatory compliance could bring with it a host of new problems for the struggling company, including crippling fines and the inability to operate until such time as it can demonstrate that it has addressed the problems raised by the regulatory agencies. Most damaging of all, however, this would be yet another huge blow to its already strained credibility with investors and healthcare provider partners, some of whom have already withdrawn or suspended their relationships with Theranos over doubts about the efficacy and accuracy of its blood tests. Beyond the problems that this could create for Theranos itself, many industry observers fear the effects that it could have on the entire sector if a contagion of doubt and panic were to grip investors and financiers, potentially stemming the flow of biotechnology venture funding and capital.

© The Digital Biologist

The Central Role Of User Experience Design In Scientific Modeling

Data is not knowledge.

Data can reveal relationships between events – correlations that may or may not be causal in nature; but by itself, data explains nothing without some form of conceptual model through which it can be assimilated into an intellectual framework that allows one to reason about it.

Computational modeling is not in the mainstream of life science research in the way that it is in other fields such as physics and engineering. And while all scientific concepts are implicitly models, most biologists have had relatively little experience of the kind of explicit modeling that we’re talking about here. In fields like biology where exposure to computational models is more limited, there is a tendency to consider their utility largely in terms of their ability to make predictions – but what often gets overlooked is the fact that models also facilitate the communication and discussion of concepts by serving as cognitive frameworks for understanding them.

Next to the challenge of representing the sheer complexity of biological systems, this cognitive element of modeling may be the single biggest reason why modeling is not in the mainstream of the life sciences. Most biological models use idioms borrowed from other fields such as physics, where modeling is both more mature and firmly in the mainstream of research.

For a model to be truly useful and meaningful in a particular field of intellectual activity, it needs to support the conceptual idioms by which ideas and knowledge are shared by those in the field.

In other words, it should be possible to put questions to the model that are couched in the conceptual idiom of the field, and to receive similarly structured answers. To the extent that this is not true of a model, there will be some degree of cognitive disconnect between the model and the user which will impede the meaningful interaction of the user with the model.

Nowhere can this be more clearly seen than in the field of software design. Software applications make extensive use of cognitive models in order to facilitate a meaningful and intuitive interaction with the user. As a very simple example – software that plays digital music reproduces the play, forward and reverse buttons that were common on physical media devices like cassette and VHS players. This is because almost everybody has the same expectations about how these interface components are to be used, based upon their prior experience with these devices. As an aside, it’s interesting to reflect on the fact that while the younger generation may see these interaction motifs everywhere in the user interfaces of software media players, many of them will never have seen the original devices whose mechanical interfaces inspired their design.


The psychology and design that determine our interactions with the objects and devices we use are such an important area of study that they have given rise to an entire field, commonly referred to as User Experience (UX) or User Experience Design. UX lies at the intersection of psychology, design and engineering, and is concerned with the way that humans interact with everything in the physical world, from a sliding door to the instrument panel of an airliner – and of course, with their analogs in the virtual world: web browsers, electronic books, photo editing software, online shopping carts and so on.

Affordances and signifiers are the currency of UX design, facilitating the interaction between the user and the object or software. If you consider an affordance as a means of interaction (like the handle on a door, for example), signifiers are cues that suggest to the user how the affordances might work. To use our very simple door handle example: a handle that consists of a flat metal plate on the door suggests that the door should be pushed open, while a handle consisting of a metal loop more strongly suggests that the door should be pulled open. For the purposes of illustration, this is just a very superficial and simple example of the kind of cognitive facilitation that effective UX design can support. By contrast, consider the role that UX design plays in highly complex, human-built systems whose interactions with the user are predicated on multiple and often interdependent conceptual models, each of enormous complexity in its own right. In some cases, a single erroneous interaction with such a system might even destroy the system and/or lead to the loss of human life.

So what does all of this have to do with scientific modeling?

By facilitating a cognitive connection between the user and an object, a device or a piece of software, effective UX design makes the interaction easier, more intuitive and more meaningful. Insofar as a computational model is being used to develop a conceptual framework that explains data, effective UX design similarly facilitates the cognitive leap from data to knowledge.

To be very clear, what we’re discussing here is user experience writ large. It encompasses considerations of the user experience design for any software that a researcher might be using to implement a model, but also a great deal more besides. The conceptual model being used to describe a biological system has a user experience component in and of itself that, when it works, provides a cognitive handle by which the system being modeled can be understood.

In a non-computational approach to understanding a system, for example, this might be manifest in something as simple as the ability to draw an explicative diagram of the system on a piece of paper. In biology, think of the kind of pathway diagrams that biologists often draw to explain cell signaling (there’s even one in this article). In physics, the Feynman diagram used to intuitively describe the behavior of subatomic particles is a perfect example of brilliant user experience design that provides a cognitive handle on a complex conceptual model.

Where the conceptual model is being implemented on a computational platform, then, areas of overlap between the user experience design of the model and that of the software are inevitable, and often even inextricable, to the extent that the conceptual model can be mapped to the software.

As we have already seen, a very common theme in the user experience design of software is the replication of components of the physical world that create an intuitive and familiar framework for the user – think for example of the near-universal adoption of conventions like files and folders in computer file-handling systems, borrowed directly from office environments that pre-date the use of computers. Such an approach can be a very useful tool for enhancing the user experience.

As the VP of Biology at a venture-funded software startup building a collaborative cloud-computing platform to model complex biological pathways, I served in large part as the product manager for the software. In practice, this actually comprised two roles. The first was an internal role as the interface between the company’s biology team, tasked with developing the applications for our product, and our software engineering team, tasked with building the product. The second was an external-facing role as a product evangelist and the liaison between our company and the life science research community – the potential client base for whom we were building our product.

One component of our cloud-computing platform was an agent-based simulation module for modeling cell signaling pathways. The ‘players’ in these simulations were, as you would expect, mostly proteins involved in cell signaling pathways – kinases, phosphatases etc., and any kind of phosphoprotein whose cellular activity is typically modulated by the kind of post-translational modification events that proteins like kinases and phosphatases mediate.
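To give a flavor of what this kind of simulation involves, here is a minimal sketch in Python of a rule-driven stochastic simulation: a kinase phosphorylates its substrate, a phosphatase reverses the modification, and species counts are tracked over time with a simple Gillespie-style algorithm. The protein names, rate constants and the algorithm itself are purely illustrative assumptions for this post, not the actual implementation of our platform.

```python
import random


def propensity(rate, reactants, counts):
    """Mass-action propensity: rate constant times the product of reactant counts."""
    p = rate
    for species in reactants:
        p *= counts[species]
    return p


def gillespie(initial_counts, rules, t_end=5.0, seed=1):
    """Tiny Gillespie-style simulator: each rule is (rate, reactants, products)."""
    rng = random.Random(seed)
    counts = dict(initial_counts)
    t, trajectory = 0.0, [(0.0, dict(counts))]
    while t < t_end:
        props = [propensity(rate, reac, counts) for rate, reac, _ in rules]
        total = sum(props)
        if total == 0:
            break                                   # nothing left that can react
        t += rng.expovariate(total)                 # time to the next reaction event
        pick = rng.uniform(0, total)                # choose which rule fires
        for (rate, reactants, products), p in zip(rules, props):
            if pick < p:
                for s in reactants:
                    counts[s] -= 1
                for s in products:
                    counts[s] += 1
                break
            pick -= p
        trajectory.append((t, dict(counts)))
    return trajectory


# Illustrative 'players' and made-up rates: a kinase phosphorylates its
# substrate and a phosphatase removes the modification.
rules = [
    (0.001, ["Kinase", "Substrate"], ["Kinase", "Substrate_P"]),
    (0.005, ["Phosphatase", "Substrate_P"], ["Phosphatase", "Substrate"]),
]
initial = {"Kinase": 50, "Phosphatase": 20, "Substrate": 1000, "Substrate_P": 0}
trajectory = gillespie(initial, rules)
print(trajectory[-1])   # final time point and species counts
```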

As a simulation proceeded on the cloud, it could be tracked by the user through a range of different visualizations in their web browser. One of these displayed the concentrations of the different molecular species present in the simulation, over time. This was initially presented as a graph like this:

[Figure: graph of simulated species concentrations over time]

But if you think about the way that a biologist in the laboratory would do this experiment, this presentation of the results, while information-rich, would not be what he or she is used to. The analogous lab experiment would probably involve sampling the reaction mixture at regular intervals and, for example, running these aliquots as a time series on a gel to visualize their fluctuations over the course of the experiment.

My initial proposal that we add a visual element to the graph, reproducing what the biologist would see if they were to run the reaction mixture from a particular time point on a gel, was met with some degree of skepticism from the software engineers.

To be fair, it has to be said at this point that any good software engineering team (consisting of developers, business analysts, product managers etc.) always will (and should) set a high bar for the approval of new features in the code, especially where there is any kind of significant cost in time, money or resources required for their implementation. We were fortunate in our company to have just such an excellent software engineering team, and so their initial resistance to this idea was not wholly unexpected. The main argument against it was that it would not be an information-rich visual presentation of the simulation results in the way that the graph already was, and furthermore, that it was redundant since this information was already presented at a much higher resolution in the graph.

When, however, in my capacity as external liaison with our potential client base, I tested the response of the life science research community to a mock-up of this feature, the results were amazingly positive.

[Figure: mock-up of the simulation interface with a simulated Western blot display]

We asked biologists who agreed to be interviewed to compare the version of the simulation interface that contained only the graph with a mock-up of an updated version (shown above) that also contained a simulated Western blot display, with a time slider that could be moved across the graph to show what the Western blot gel would look like at each sampled time point.
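The idea is simple to state in code. Here is a small illustrative sketch of how such a display can be driven directly by the simulation output: the species counts at the sample nearest the slider’s time point are mapped to band intensities. The scaling, names and text rendering are assumptions for illustration only, continuing from the simulation sketch above, not our platform’s actual rendering code.

```python
def band_intensity(count, saturation=1000.0):
    """Map a species count to a 0-1 band darkness, saturating like an over-exposed blot."""
    return min(count / saturation, 1.0)


def western_blot_at(trajectory, time_point, species):
    """Pick the simulated sample nearest the slider's time point and 'run it on a gel'."""
    _, counts = min(trajectory, key=lambda sample: abs(sample[0] - time_point))
    return {s: band_intensity(counts.get(s, 0)) for s in species}


# 'Move the slider' to t = 2.5 and render two lanes as text, for illustration.
lanes = western_blot_at(trajectory, time_point=2.5,
                        species=["Substrate", "Substrate_P"])
for protein, intensity in lanes.items():
    print(f"{protein:12s} {'#' * int(intensity * 20)}")
```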

Their responses were striking. What we heard most often from them (and I’m aggregating and paraphrasing the majority response here) was that the version of the interface with the Western blot display made a great deal more sense to them, because it helped them to make the mental leap between the data being output from the model and what the model was actually telling them. Perhaps most importantly, in their minds it also reinforced the idea of the computational simulation as a virtual experiment whose results could help guide their decisions about which physical experiments to do in the lab.

The new visualization was not information-rich, as the software engineers had rightly pointed out; but in its ability to frame the output from the simulation model in an idiom that was meaningful to the biologist, it created a richer and deeper cognitive connection between the biologist-modeler and the biology that was being represented and explored in the model.

Recognizing that modeling will only ever really become a part of the mainstream in life science research, in the way that it is in physics, if it can be done in an idiom that is appropriate for biology, we took that idea very seriously. It permeated every aspect of the development of our collaborative computational modeling platform, especially since it was also clear from our own product and market research that biologists were no more willing to become mathematicians or computer scientists in order to use models in their own research than people are willing to become mechanics in order to drive cars.

Take a look, for example, at this cartoon a biologist drew of a cell signaling pathway (thanks Russ). It illustrates perfectly the paradigm of an interconnected network of signaling proteins that is, in essence, the biology community’s consensus model for how cell signaling works. At some level, it matters little that we cannot consider this to be a realistic, physical model of cell signaling, since it implies the existence of static ‘biological circuits’ that do not in reality exist in the cell. In using this model, however, biologists are not suggesting this at all. The model does a very good job of representing, conceptually, the network of interactions that determine the functional properties of a cell signaling pathway.

There are some obvious intuitive benefits to this model (and many more very subtle ones). For example, if we were to trace the network edges from one protein (node) to another and discover that they were not connected by any of the other proteins, we could infer that none of the states available to the first protein could ever have an influence on the states of the second.
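That inference is, at bottom, a simple reachability question on the network, and it is easy to state in code. The proteins and edges below are illustrative only, not those of the cartoon.

```python
from collections import deque


def connected(edges, protein_a, protein_b):
    """True if two proteins are linked, directly or via other proteins, in the network."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)   # treat edges as undirected,
        neighbors.setdefault(b, set()).add(a)   # as in a pathway cartoon
    seen, queue = {protein_a}, deque([protein_a])
    while queue:
        node = queue.popleft()
        if node == protein_b:
            return True
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False


# Illustrative edges: one connected Raf-Mek-Erk chain and one unrelated pair.
cartoon = [("EGFR", "Ras"), ("Ras", "Raf"), ("Raf", "Mek"), ("Mek", "Erk"),
           ("p53", "Mdm2")]
print(connected(cartoon, "EGFR", "Erk"))   # True: the nodes share a path
print(connected(cartoon, "EGFR", "p53"))   # False: no path, so no possible influence
```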

Here, for comparison, is the analogous representation of that same cell signaling pathway, assembled on our cloud computing platform using a set of lexical rules that describe each of the ‘players’ and their interactions. Even the underlying semantic formalism that we used as a kind of biological assembly language to represent the players (usually proteins) and their interactions was couched in terms of a familiar and relatively small set of biological events (binding, unbinding, modification etc.) that are, in themselves, sufficient to represent almost everything that happens in a cell at the level of its signaling pathways.
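To give a flavor of what such a small, biology-flavored rule vocabulary might look like, here is an illustrative sketch of rules written as plain statements of binding, unbinding and modification events, parsed into structured form. The syntax and parser are inventions for this post, not our platform’s actual rule language.

```python
import re

# A deliberately tiny vocabulary of biological events, echoing the kinds of
# events described above (binding, unbinding, modification).
RULE = re.compile(
    r"(?P<agent>\w+)\s+(?P<event>binds|unbinds|phosphorylates|dephosphorylates)\s+(?P<target>\w+)"
)


def parse_rule(text):
    """Turn a statement like 'Raf phosphorylates Mek' into a structured event."""
    match = RULE.fullmatch(text.strip())
    if match is None:
        raise ValueError(f"Unrecognized rule: {text!r}")
    return match.groupdict()


pathway_rules = [
    "Raf binds Mek",
    "Raf phosphorylates Mek",
    "Mek unbinds Raf",
    "Mek phosphorylates Erk",
    "PP2A dephosphorylates Erk",
]
for rule in pathway_rules:
    print(parse_rule(rule))
```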

In summary then, insofar as computational models facilitate thinking and reasoning about the biological systems we study and collect data from, they can help us much more effectively if they allow us to work in the idioms that are familiar and appropriate to our field. This notion can be more fully grasped by considering its antithesis: the use of ordinary differential equations (ODEs) to model biological systems, which still tends to be the dominant paradigm for biological modeling despite being an exceedingly opaque and unintuitive approach that scales poorly to systems of real biological complexity.
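To see why, consider what even the trivial kinase/phosphatase cycle sketched earlier looks like when written out as textbook mass-action ODEs (given here purely for illustration):

$$\frac{d[S_P]}{dt} = k_{1}\,[K]\,[S] - k_{2}\,[Ph]\,[S_P], \qquad \frac{d[S]}{dt} = k_{2}\,[Ph]\,[S_P] - k_{1}\,[K]\,[S]$$

where $[K]$, $[Ph]$, $[S]$ and $[S_P]$ are the concentrations of the kinase, the phosphatase, the unmodified substrate and the phosphorylated substrate respectively, and $k_1$ and $k_2$ are rate constants. Even with only two reactions, the biology is already buried in the notation; a real pathway with dozens of interdependent species and states quickly becomes unreadable in this form.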

It is also clear that software developers need to work closely with experts who have specialized domain knowledge if they are to create computational modeling platforms that will not only be effective for their particular domain, but also widely adopted by its practitioners. In the case of biology, it was clear to us when we were developing our modeling platform that its success would depend in no small part on the appeal that it could make to the imagination and intuition of the biologist. With computational modeling as with software development, even the most meticulously crafted of tools will have little or no impact or utility in its field if a cognitively dissonant user experience results in it rarely or never being used.

© The Digital Biologist

Academic research publishing is a broken system that needs to be fixed

Many in the life sciences agree that academic research is a broken and dysfunctional system that has largely ceased to be a true meritocracy. The evidence certainly supports the broader consensus among researchers that the field is in crisis. At the heart of the problem is the current system of academic publishing. Since well before U.S. life science research funding started to stagnate in recent years, the insidious ‘Publish or Perish’ mantra has been progressively eroding the quality and integrity of academic research through its perpetuation of a faux aristocracy of elite journals and its emphasis on bogus and self-fulfilling metrics of academic success like impact factors.

By defining and measuring academic success in terms of the number of publications in these ‘elite’ journals, the research community has (perhaps unwittingly) ceded enormous and undue influence in determining the direction of academic research to the editors, executives and shareholders of multi-billion-dollar publishing corporations. The leading academic publishing companies even have an indirect but significant influence over which research actually gets done, since it is largely an assessment of research publications that determines who gets funding and who does not.

With better research funding comes a better chance of having your research published in an elite journal. With more publications in elite journals come better chances of getting research funding. This is the vicious feedback loop that sustains the status quo in academic research, creating something of an echo chamber in which the same few voices are increasingly and disproportionately represented.

Even the academic peer review process, upon which the research community has always depended to eliminate bias and ensure the integrity of published research, is itself deeply flawed and, at times, unequivocally corrupt.

That many of these elite journals also build their own lucrative paywalls around the results of publicly funded research has only compounded the problem. The eye-watering prices that these academic publishing companies charge for their journals play a considerable role in further draining public money from a research system that is already enduring a major funding crisis. By some estimates, the subscriptions that universities must pay for access to these journals swallow up as much as 10% of the public research funding that they receive. This public money is essentially being channeled away from research and into the coffers of private sector corporations.

In addition to paying for the wealth of materials and resources necessary to conduct scientific research – laboratory reagents, consumables, equipment, instruments and so on – these funds, in most cases, must also cover the salaries of the researchers, who are already woefully under-compensated for their experience and expertise. It is a testament to how expensive access to these journals has become that even Harvard University, one of the wealthiest institutions of higher education in the world, recently sent a memo to its faculty members informing them that it could no longer afford the price hikes imposed by many large journal publishers.

The publishers have repeatedly tried to claim that the prices they charge scientists and institutions for access to their journals are necessary to offset the costs incurred in their work to ensure the quality of the published research. In reality, however, the major academic publishers have significantly increased their profit margins over the last few years while institutional libraries struggle to keep up with the rising costs. The University of Montreal, for example (much to the dismay of its research staff), has started scaling back its journal subscriptions as its annual expenditure on academic journals has reached an unsustainable $7 million. One of its own researchers in its School of Library and Information Science summarized the situation very well:

The quality control is free, the raw material is free, and then you charge very, very high amounts – of course you come up with very high profit margins.

Vincent Larivière, University of Montreal

The economics of the current situation are not the only issue for academic research. The pressure to publish in order to have a successful academic career has also given rise to a dramatic increase in the publishing of fraudulent results, and a corresponding and even more dramatic decrease in the reproducibility of academic research. To quote a 2013 report in the Economist: “A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic.” This observation by itself is a serious indictment of the current state both of academic research and of the peer-reviewed academic publishing process that is supposed to ensure the integrity, accuracy and honesty of published scientific research.

To be fair, there are undoubtedly other forces at work here as well – the government’s reluctance even to sustain current levels of public funding for research, for example. Yet through their willingness to generate excessive profit at the expense of publicly funded research, and their inordinate and undue influence over its direction, the leading academic publishers have played a central and significant role in the steady deterioration of both the quality of academic research and the professional lives of the scientists who pursue it.

New and better models of research publishing must, of necessity, take center stage in any attempt to reform academic research.

Thankfully, researchers are increasingly starting to recognize that the current situation in academic publishing is untenable, and there are already signs of a movement against it. Scientists worried about the future of their profession have started to organize in order to give voice to their concerns. Researchers and research institutions are openly championing an end to the publication of publicly funded research in subscription journals. Interestingly and most recently, we have even started to see signs of dissent at the heart of the academic publishing industry itself. Witness, for example, the spectacular mutiny of the entire editorial board of an academic journal owned by Elsevier over the issue of allowing the journal to become Open Access. Elsevier apparently refused to address the editors’ longstanding concerns, with the result that the departing editors are now preparing to launch a directly competing Open Access journal that seems certain to reduce the readership of Elsevier’s own subscription-based publication.

In the effort to reform academic research, there are undoubtedly important battles to be fought on several different fronts, the most visible and easily understood of which is the public funding of research. In the U.S. at least, this has been an uphill battle in recent years as a result of opposing political forces and a seeming apathy, in the court of public opinion, toward the case for funding scientific research with public money. It is, however, hard to imagine, even given a windfall of additional public funding, that the quality and integrity of academic research or the lot of its researchers could be significantly improved so long as the current system of academic publishing remains in place. Academic research can only flourish if it is able to function as a true meritocracy, and that will mean (amongst other things) breaking its damaging dependence upon the current system of academic publishing for its validation.

© The Digital Biologist

The healthcare sector now has its own technology cheating scandal.

So after the whole Volkswagen scandal, now the science and technology sector has its own “caught cheating” story as well. This time it’s the turn of Theranos – a medical diagnostic company whose technology and founder have been much lauded in the media and held up as exemplars of technological innovation and progress.

Based upon some investigative journalism by the Wall Street Journal, which includes information disclosed by Theranos employees and others with inside knowledge of the secretive company’s workings, it seems that Theranos is using other companies’ technologies for the great majority of its diagnostic blood tests, and may even have done so in order to duplicitously win FDA approval for its own much-lauded but never publicly disclosed diagnostic technology – the same proprietary technology that underpins the company’s $9 billion valuation.

In the world of technology and medicine, it is perhaps no surprise that there are always some with serious skin in the game who are not above resorting to hype, exaggeration and outright dishonesty where there’s money at stake. The startup world is certainly no stranger to this – and now it looks like Theranos has been caught cheating in order to advance its own blood test technology, while trying to hide this from regulatory agencies and investors in a cover-up that has some parallels with the Volkswagen scandal.

Perhaps even more interesting, though, is that these kinds of problems seem to be exacerbated by the cult of celebrity that surrounds those who are perceived to be the movers and shakers of the startup world. Had Theranos founder Elizabeth Holmes not been put on such a high pedestal by her investors, peers and the press, it is interesting to wonder whether she might have been taken to task much earlier for refusing to scientifically substantiate the efficacy and accuracy of her company’s proprietary diagnostic technology.

Even in the dry and ‘objective’ world of science and technology, people still seem to need to create heroes and to build a kind of mythology around them and their work, however disconnected the stories are from the reality.

As this Wired article points out, when this kind of bubble of delusion and deception gets built around something like a social network startup, what stands to be lost is mostly money. That’s already bad enough, but when the company in question has a hand in your healthcare, what’s at stake could be much, much more serious.

Image courtesy of PostMemes

© The Digital Biologist

What Innovation Is Not

Innovation is seen as something as American as apple pie, something that everybody from the US President on down is talking about. From university presidents and corporate leaders to Silicon Valley tycoons, all agree that we need more of it. Against this background of hype, my colleague and fellow scientist Alex Lancaster finds that Scott Berkun’s book “The Myths of Innovation” is a refreshing and unpretentious take on this overused buzzword.

© The Digital Biologist

Not yet as popular as Grumpy Cat but …

… in the first quarter of 2014, “The Digital Biologist” was read in 70 countries around the world. So while it’s very unlikely that the high octane, rollercoaster world of computational biology will ever have the pulling power of internet memes like a comically sour-faced kitty or the lyrical stylings of Justin Bieber, what we lack in numbers we certainly make up for in geographical diversity. Can Grumpy Cat or Justin Bieber claim a following in Liechtenstein for example? Neither can we actually, but we can make that claim for Sierra Leone (if you can really call a single reader a “following” – thank you mysterious Sierra Leone Ranger, whoever you are :-).

Anyway, wherever you are visiting from, thank you all so much for your readership and don’t forget that you can also join the conversation via the LinkedIn Digital Biology Group and at our Facebook page.

 © The Digital Biologist | All Rights Reserved 

The LinkedIn Digital Biology group passes the 1,000 member milestone

While Digital Biology will probably never be a social network mega-phenomenon on the scale of Justin Bieber or Grumpy Cat, it is pleasing to note that during this last month, membership of the Digital Biology group on LinkedIn passed a major milestone with the arrival of its one thousandth member. Thanks to everybody who subscribes and especially to those who participate in the regular discussions that take place in its forum.

If you have a LinkedIn account, you can join the conversation here.

© The Digital Biologist | All Rights Reserved

Around the world in 30 days

Incredible! “The Digital Biologist” was actually read in 67 different countries around the globe over the course of the last 30 days. This map shows the readership by territory, with the darker blues corresponding to more readers (you can click on the map to enlarge it for a better view). The biggest single concentration of readers is clearly in the U.S., but “The Digital Biologist” truly has a worldwide readership now.

Thank you all so much 🙂

 © The Digital Biologist | All Rights Reserved