Science relies on model systems. Models are convenient simplifications of a more complex original form. A model train has the same shape, colour, and proportions as a real train. The model moves the same way as the original: wheels rolling forwards or backwards along tracks. The model’s motion has the same constraints as the original; neither can move side-to-side or up-and-down. As such, the model is true: it accurately represents the motion of the original train.
But the model is only a substitution. It’s crucial to know when it fails to represent the original. The model train does not have the same mass or density as the original. Hence it cannot represent the acceleration or deceleration of the original. The model is powered by electricity – not coal and steam – and so it cannot represent the emissions of the true train. Still, the convenience of the model outweighs these limitations. Models remain invaluable tools – so long as one is aware of their differences from the original.
I study a human pediatric leukemia. I do not, however, study sick children. Instead I use a mouse model that mimics the patients’ disease. In fact, my whole project is one model nested in another, much like the Matryoshka dolls of my Ukrainian upbringing. I mimic the patient disease in mice. I identify the mouse disease with blood tests. The test results are caused by an abnormal protein. This abnormality is the result of a DNA mutation. I detect the DNA mutation by PCR techniques.
A model in a model in a model…
Sounds precarious, no? Yet the ability of each of these systems to represent its original has been extensively validated. Hundreds of researchers have used them reproducibly. The findings from these models continue to track back to the original patients in the form of new treatments. Medicine’s 20th century progress would have been impossible without models. After all, would you test new treatments on patients without first proving their safety in cells, mice, and other models?
Yet, models remain simplifications. In certain situations they will inevitably fail to represent the original. The importance of knowing each model’s limitations cannot be overstated.
A few months ago a subset of my mice stopped getting sick. They repeatedly failed to develop a patient-like disease. I ran my controls, and all were consistent. The PCR test confirmed the mice still had the band corresponding to the DNA mutation (see figure). That mutation should produce the aberrant protein, which should produce the disease, which should model the patients. After 2 years of steadfast service, my model stopped working. I re-ran my PCRs, re-obtained DNA samples, repeated the blood tests. Every time, the same answer: the mutation appeared present, yet the mice stayed healthy. What was wrong?
One morning I woke determined: I was going to suspend my faith in science. I was no longer going to trust my model. That day, instead of looking for the mutation with PCR, I decided to directly sequence the DNA of my mice. The results were clear-cut: no mutation.
Rage. Tears. More rage.
Yet, even in my fury, the beauty of the biology was not lost on me. My diseased mice had mutated back to a healthy wild-type form. The selection pressure against the aberrant DNA/protein was so strong that once a healthy clone emerged, it quickly replaced the diseased ones. Imagine! If only that could happen in patients!
This experience taught me a lot: how to cope with frustration, how to identify a model’s limitations, when to stop taking models for granted. I know that I am lucky: had I not found the DNA change, I would have continued my studies, trying to model a patient disease in healthy mice. Now I’m only set back a month or two – as I wait for new ‘true’ model mice to arrive. In the meantime I have time to plan my upcoming experiments, collaborate with others, and reflect on my good fortune.