The Central Role Of User Experience Design In Scientific Modeling

Data is not knowledge.

Data can reveal relationships between events – correlations that may or may not be causal in nature. But by itself, data explains nothing; it must be assimilated, through some form of conceptual model, into an intellectual framework that allows one to reason about it.

Computational modeling is not in the mainstream of life science research in the way that it is in other fields such as physics and engineering. And while all scientific concepts are implicitly models, most biologists have had relatively little experience of the kind of explicit modeling that we’re talking about here. In fields like biology where exposure to computational models is more limited, there is a tendency to consider their utility largely in terms of their ability to make predictions – but what often gets overlooked is the fact that models also facilitate the communication and discussion of concepts by serving as cognitive frameworks for understanding them.

Next to the challenge of representing the sheer complexity of biological systems, this cognitive element of modeling may be the single biggest reason why modeling is not in the mainstream of the life sciences. Most biological models use idioms borrowed from other fields such as physics, where modeling is both more mature and more firmly in the mainstream of research.

For a model to be truly useful and meaningful in a particular field of intellectual activity, it needs to support the conceptual idioms by which ideas and knowledge are shared by those in the field.

In other words, it should be possible to put questions to the model that are couched in the conceptual idiom of the field, and to receive similarly structured answers. To the extent that this is not true of a model, there will be some degree of cognitive disconnect between the model and the user which will impede the meaningful interaction of the user with the model.

Nowhere can this be more clearly seen than in the field of software design. Software applications make extensive use of cognitive models in order to facilitate a meaningful and intuitive interaction with the user. As a very simple example – software that plays digital music reproduces the play, forward and reverse buttons that were common on physical media devices like cassette and VHS players. This is because almost everybody has the same expectations about how these interface components are to be used, based upon their prior experience with these devices. As an aside, it’s interesting to reflect on the fact that while the younger generation may see these interaction motifs everywhere in the user interfaces of software media players, many of them will never have seen the original devices whose mechanical interfaces inspired their design.


The psychology and design that determine these interactions with the objects and devices that we use are such an important area of study that they have given rise to an entire field, commonly referred to as User Experience (UX) or User Experience Design. UX lies at the intersection of psychology, design and engineering, and is concerned with the way that humans interact with everything in the physical world, from a sliding door to the instrument panel of an airliner – and of course, with their analogs in the virtual world: web browsers, electronic books, photo editing software, online shopping carts and so on.

Affordances and signifiers are the currency of UX design, facilitating the interaction between the user and the object or software. If you consider an affordance as a means of interaction (like the handle on a door for example), signifiers are signs for the user that suggest how the affordances might work. To use our very simple door handle example – a handle that consists of a flat metal plate on the door suggests that the door be pushed open. A handle consisting of a metal loop more strongly suggests that the door should be pulled open. For the purposes of illustration, this is just a very superficial and simple example of the kind of cognitive facilitation that effective UX design can support. By contrast, consider the role that UX design plays in highly complex, human-built systems whose interactions with the user are predicated on multiple and often interdependent conceptual models, each of enormous complexity in its own right. In some cases, a single, erroneous interaction with such a system might even destroy the system and/or lead to the loss of human life.

So what does all of this have to do with scientific modeling?

By facilitating a cognitive connection between the user and an object, a device or a piece of software, effective UX design makes the interaction easier, more intuitive and more meaningful. Insofar as a computational model is being used to develop a conceptual framework that explains data, effective UX design similarly facilitates the cognitive leap from data to knowledge.

To be very clear, what we’re discussing here is user experience writ large. It encompasses considerations of the user experience design of any software that a researcher might be using to implement a model, but also a great deal more besides. The conceptual model being used to describe a biological system has a user experience component in and of itself that, when it works, provides a cognitive handle by which the system being modeled can be understood.

In a non-computational approach to understanding a system, for example, this might be manifested in something as simple as the ability to draw an explicative diagram of the system on a piece of paper. In biology, think for example of the kind of pathway diagrams that biologists often draw to explain cell signaling (there’s even one in this article). In physics, the Feynman diagram used to intuitively describe the behavior of subatomic particles is a perfect example of a piece of brilliant user experience design that provides a cognitive handle on a complex conceptual model.

In the case, then, where the conceptual model is being implemented on a computational platform – to the extent that the conceptual model can be mapped to the software – areas of overlap between the user experience design of the model and that of the software are inevitable, and often even inextricable.

As we have already seen, a very common theme in the user experience design of software is the replication of components of the physical world that create an intuitive and familiar framework for the user – think for example of the near-universal adoption of conventions like files and folders in computer file-handling systems, borrowed directly from office environments that pre-date the use of computers. Such an approach can be a very useful tool for enhancing the user experience.

As the VP of Biology at a venture-funded software startup building a collaborative, cloud-computing platform to model complex biological pathways, a major part of my role in the company was to serve as the product manager for the software. In practice, this actually comprised two roles. The first was an internal role as the interface between the company’s biology team tasked with developing the applications for our product, and our software engineering team who were tasked with building the product. The second was an external-facing role as a product evangelist and the liaison between our company and the life science research community – the potential client base for whom we were building our product.

One component of our cloud-computing platform was an agent-based simulation module for modeling cell signaling pathways. The ‘players’ in these simulations were, as you would expect, mostly proteins involved in cell signaling pathways – kinases, phosphatases etc. – and any kind of phosphoprotein whose cellular activity is typically modulated by the post-translational modification events that proteins like kinases and phosphatases mediate.

As a simulation proceeded on the cloud, it could be tracked by the user through a range of different visualizations in their web browser. One of these displayed the concentrations over time of the different molecular species present in the simulation. This was initially presented as a graph like this:

[Figure: concentrations of the simulated molecular species plotted against time]

But if you think about the way that a biologist in the laboratory would do this experiment, this presentation of the results, while information-rich, would not be what he or she was used to. The analogous lab experiment would probably involve sampling the reaction mixture at regular intervals and, for example, running these aliquots as a time series on a gel to visualize their fluctuations over the course of the experiment.

My initial proposal – that we add a visual element to the graph reproducing what the biologist would see if they ran the reaction mixture from a particular time point on a gel – was met with some degree of skepticism from the software engineers.

To be fair, it has to be said at this point that any good software engineering team (consisting of developers, business analysts, product managers etc.) always will (and should) set a high bar for the approval of new features in the code, especially where there is any significant cost in time, money or resources required for their implementation. We were fortunate in our company to have just such an excellent software engineering team, and so their initial resistance to this idea was not wholly unexpected. The main argument against it was that it would not be an information-rich visual presentation of the simulation results in the way that the graph already was, and furthermore, that it was redundant, since this information was already presented at a much higher resolution in the graph.

When, however, in my capacity as external liaison with our potential client base, I tested the response of the life science research community to a mock-up of this feature, the results were amazingly positive.

[Figure: mock-up of the simulation interface showing the concentration graph with a time slider and a simulated Western blot display]

We asked the biologists who agreed to be interviewed to compare the version of the simulation interface that contained only the graph with a mock-up of an updated version (shown above) that also contained a simulated Western blot display, with a time slider that could be moved across the graph to show what the Western blot gel would look like at each sampled time point.

Their responses were striking. What we heard most often from them (and I’m aggregating and paraphrasing the majority response here), was that the version of the interface with the Western blot display made a great deal more sense to them because it helped them to make the mental leap between the data being output from the model and what the model was actually telling them. Perhaps most importantly – in their minds it also reinforced the idea of the computational simulation as a virtual experiment whose results could help guide their decisions about which physical experiments to do in the lab.

The software engineers had rightly pointed out that this new visualization was not information-rich – but in its ability to frame the output from the simulation model in an idiom that was meaningful to the biologist, it created a richer and deeper cognitive connection between the biologist-modeler and the biology that was being represented and explored in the model.
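For readers who like to see how such a display might hang together, here is a minimal sketch in which each species becomes a band whose darkness tracks its concentration at the time point chosen on the slider. Everything in it is hypothetical – the data layout, the invented example time courses and the rendering code are illustrations, not our platform’s actual implementation.

```python
# Illustrative sketch only (hypothetical data layout, not the platform's code):
# render a simulated "Western blot" for the time point chosen with a slider,
# drawing each species as a band whose darkness tracks its concentration.
import numpy as np
import matplotlib.pyplot as plt

def simulated_blot(times, concentrations, t_selected, ax):
    """Draw one grayscale band per species at the sampled time nearest t_selected."""
    idx = int(np.argmin(np.abs(times - t_selected)))         # nearest sampled time point
    c_max = max(c.max() for c in concentrations.values()) or 1.0
    for lane, (name, c) in enumerate(concentrations.items()):
        darkness = c[idx] / c_max                             # 0 = absent, 1 = maximal
        ax.barh(y=lane, width=0.8, left=0.1, height=0.6,
                color=f"{1.0 - darkness:.3f}")                # grayscale "band"
        ax.text(1.0, lane, name, va="center", fontsize=9)
    ax.set_xlim(0, 2.5)
    ax.set_yticks([])
    ax.set_xticks([])
    ax.set_title(f"Simulated blot at t = {times[idx]:.0f} min")

# Made-up kinase/substrate time courses, purely for demonstration.
t = np.linspace(0, 60, 121)
species = {
    "Substrate (unphosphorylated)": 10.0 * np.exp(-t / 20.0),
    "Substrate (phosphorylated)":   10.0 * (1.0 - np.exp(-t / 20.0)),
}
fig, ax = plt.subplots(figsize=(4, 2))
simulated_blot(t, species, t_selected=30.0, ax=ax)
plt.show()
```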

Recognizing that if modeling is ever to really become part of the mainstream of life science research in the way that it is in physics, it will have to be done in an idiom appropriate to biology, we took this idea very seriously. It permeated every aspect of the development of our collaborative computational modeling platform, especially since it was also clear from our own product and market research that biologists were no more willing to become mathematicians or computer scientists in order to use models in their own research than people are willing to become mechanics in order to drive cars.

[Figure: a biologist’s hand-drawn cartoon of a cell signaling pathway]

Take a look, for example, at this cartoon that a biologist drew of a cell signaling pathway (thanks Russ). It illustrates perfectly the paradigm of an interconnected network of signaling proteins that is, in essence, the biology community’s consensus model for how cell signaling works. At some level, it matters little that we cannot consider this to be a realistic, physical model of cell signaling, since it implies the existence of static ‘biological circuits’ that in reality do not exist in the cell. In using this model, however, biologists are not suggesting this at all. The model does a very good job of representing, conceptually, the network of interactions that determines the functional properties of a cell signaling pathway.

There are some obvious intuitive benefits to this model (and many more very subtle ones). For example, if we were to try to trace the network edges from one protein (node) to another and discovered that they were not connected by any of the other proteins, we could infer that none of the states available to the first protein could ever have an influence on the states of the second.
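This intuition maps directly onto a simple graph operation. The toy sketch below – using an invented pathway fragment rather than real interaction data – just checks whether any chain of interactions connects two proteins; if none does, the states of the first can never influence those of the second.

```python
# Toy illustration (invented pathway fragment): if no chain of interactions
# connects two proteins in the network, no state of the first can ever
# influence the states of the second.
from collections import deque

def can_influence(network, source, target):
    """Breadth-first search over an interaction network given as an adjacency dict."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbour in network.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return False

# Hypothetical network: edges represent direct physical interactions.
pathway = {
    "Raf": ["Mek"],
    "Mek": ["Raf", "Erk"],
    "Erk": ["Mek"],
    "X":   ["Y"],          # a pair of proteins deliberately left unconnected
    "Y":   ["X"],
}
print(can_influence(pathway, "Raf", "Erk"))   # True  - a connecting path exists
print(can_influence(pathway, "Raf", "Y"))     # False - no path, hence no influence
```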

[Figure: the same Raf-Mek-Erk signaling pathway assembled on our cloud computing platform]

Here, for comparison, is the analogous representation of that same cell signaling pathway, assembled on our cloud computing platform using a set of lexical rules that describe each of the ‘players’ and their interactions. Even the underlying semantic formalism that we used as a kind of biological assembly language to represent the players (usually proteins) and their interactions was couched in terms of a familiar and relatively small set of biological events (binding, unbinding, modification etc.) that are, in themselves, sufficient to represent almost everything that happens in a cell at the level of its signaling pathways.
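To give a flavor of what a rule-based description along these lines might look like (a hypothetical sketch only, not the actual syntax of our platform’s formalism, and with illustrative site names and rate constants), a simple kinase cascade can be written as a short list of bind, unbind and modify events:

```python
# Hypothetical sketch of a rule-based "biological assembly language" - not the
# platform's actual formalism - in which a pathway is described by a small set
# of event types (bind, unbind, modify) rather than by rate equations.
from dataclasses import dataclass

@dataclass
class Rule:
    event: str          # "bind", "unbind" or "modify"
    subject: str        # the acting protein
    target: str         # the protein acted upon
    site: str           # binding or modification site (illustrative names)
    rate: float         # kinetic rate constant for the event

raf_mek_erk = [
    Rule("bind",   "Raf", "Mek", "docking_site", 0.10),
    Rule("modify", "Raf", "Mek", "site_a",       1.00),   # phosphorylation of Mek
    Rule("unbind", "Raf", "Mek", "docking_site", 0.05),
    Rule("bind",   "Mek", "Erk", "docking_site", 0.10),
    Rule("modify", "Mek", "Erk", "site_a",       1.00),   # phosphorylation of Erk
    Rule("unbind", "Mek", "Erk", "docking_site", 0.05),
]

for rule in raf_mek_erk:
    print(f"{rule.subject} --{rule.event}--> {rule.target} @ {rule.site} (k = {rule.rate})")
```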

In summary then, insofar as computational models facilitate thinking and reasoning about the biological systems we study and collect data from, they can help us much more effectively if they allow us to work in the idioms that are familiar and appropriate to our field. This notion can be more fully grasped by considering its antithesis – the use of ordinary differential equations (ODEs) to model biological systems, which still tends to be the dominant paradigm for biological modeling despite being an exceedingly opaque and unintuitive approach that scales poorly to systems of real biological complexity.

It is also clear that software developers need to work closely with experts who have specialized domain knowledge if they are to create computational modeling platforms that will not only be effective for their particular domain, but also widely adopted by its practitioners. In the case of biology, it was clear to us when we were developing our modeling platform that its success would depend in no small part on the appeal it could make to the imagination and intuition of the biologist. With computational modeling as with software development, even the most meticulously crafted of tools will have little or no impact or utility in its field if a cognitively dissonant user experience results in it rarely or never being used.

© The Digital Biologist

An “Uncertainty Principle” for traditional mathematical approaches to biological modeling

If you’re a biological modeler, chances are there are two words that keep you up at night and that on occasion, might even have given you serious pause to question the wisdom of your choice of profession.

Those two words are combinatorial complexity.

[Figure: a kinase K that binds and phosphorylates a substrate S at either of two sites, a and b]

For anyone not entirely familiar with this concept, imagine one of the simplest possible biomolecular systems, with a kinase K that can bind to and phosphorylate a substrate S at either of two positions a and b, as shown in this first diagram. Even this very simple system can produce 13 possible molecular species: K unbound, S unbound in one of its 4 phosphorylation states, K bound at a with S in one of its 4 states, and K bound at b with S in one of its 4 states.
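The count of 13 is easy to verify by brute-force enumeration – the throwaway sketch below simply lists every combination of phosphorylation state and kinase occupancy:

```python
# Brute-force enumeration of the molecular species in the toy system:
# substrate S has two sites (a, b), each phosphorylated or not, and the
# kinase K is either free in solution or bound to S at one of the two sites.
from itertools import product

species = [("K", "free")]                                    # the unbound kinase
for phos_a, phos_b, occupancy in product((False, True),      # site a phosphorylated?
                                         (False, True),      # site b phosphorylated?
                                         ("unbound", "K@a", "K@b")):
    species.append(("S",
                    "a~P" if phos_a else "a",
                    "b~P" if phos_b else "b",
                    occupancy))

print(len(species))   # 13 = 1 free kinase + (4 phospho-states x 3 binding states)
```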

Taking the traditional biological modeling approach of using ordinary differential equations (ODEs), you would therefore have to write 13 rate equations to describe this system. So far so good.

[Figure: the same system with the addition of a phosphatase P that dephosphorylates sites a and b on S]

But now let’s add the phosphatase P that dephosphorylates the sites a and b on S. If we do a similar analysis of the possible molecular species for our new system, taking into account now the possible bound and unbound states of both P and K on S, we discover that the addition of this single agent P yields 21 new molecular species in addition to the 13 that we already had! Furthermore, since we are working with a model of interdependent ODEs, we will also need to rewrite our original set of rate equations.

If it takes all this work to describe what is almost the simplest imaginable kind of system, how many equations would we need for a real biological system? How many rate equations would we need to describe the canonical epidermal growth factor receptor (EGFR) signaling pathway for example?

Hold on to your hats … drum roll … somewhere north of 10^30 equations.

Yikes!
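A back-of-the-envelope calculation shows where numbers of this magnitude come from (the per-protein site counts below are invented purely for illustration, and this is not the exact accounting behind the figure above): a protein with n independently modifiable sites has 2^n distinct phospho-forms, and the proteins of a pathway multiply those counts together before binding configurations are even considered.

```python
# Back-of-the-envelope scaling only - the per-protein site counts are invented.
sites_per_protein = [9, 12, 7, 16, 10, 8, 14, 11, 6, 7]    # a hypothetical pathway, 100 sites in total
total_sites = sum(sites_per_protein)                        # 100 modifiable sites
print(2 ** total_sites)                                     # ~1.3 x 10^30 distinct phospho-forms
```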

All this said, however, biological modelers have built models of complex cellular systems like the EGFR pathway, so how on earth have they done it? The answer is by simplifying the system, typically either by ignoring features that are presumed to minimally impact the system’s behavior, or by aggregating features to create a less granular description of the system, again under the presumption that this will not significantly affect the model’s behavior.

The danger inherent in such approaches is that they require a set of a priori hypotheses about what are, and what are not, the important features of the system – i.e. they require a decision about which aspects of the model will least affect its behavior before the model has ever been run.

The famous Uncertainty Principle that we all learned in high school physics states that it is impossible to simultaneously determine, with arbitrary precision, both the position and the momentum of a particle such as an electron. A recasting of this principle for traditional biological modeling might be:

“Scope or resolution, but not both at the same time”

One could argue that whereas the original physical principle is absolute, in the case of biological modeling the limitation is one of technology – “If we had a big enough, fast enough computer …” etc. Perhaps, but when you compare the storage and processing time required to solve a system of 10^30 equations with the scale of our universe, the biological modeling version of the principle seems pretty darn absolute to me.

Did I hear someone say “quantum computer”?

Just let me know when they’ve built one that could address this problem and I will gladly publish an update 🙂

© The Digital Biologist