There has been a spate of blog articles in recent weeks by Michael Tobis, And Then There’s Physics, Victor Venema and Eli Rabett. In part, this was sparked by an article in the New York Times misleadingly titled There Is No Scientific Method. I’m far from a philosopher of science, so what follows is not rigorous or complete, it’s just some idiosyncratic and random stuff I’ve picked up along the way.
I’ve never been much of a method person. I work haphazardly, with an untidy desk and a short attention span, going from one uncompleted thought to the next. When I read a scientific paper, I jump all over the place, starting with the conclusions, then trying to decipher the diagrams, looking at the supplementary material, then, as a last resort, I’ll read the introduction and abstract.
To be sure, I’m eventually capable of providing a logical and orderly explanation of my thinking, but it’s a fiction that describes the shortest path between question and answer rather than the random walk I actually took.
I was always thrown off-balance during my business career if, when I was proposing a project idea, somebody would ask “what’s the process here?”, mainly because I had no clue how to answer. Being forced to attend meetings built around process flowcharts, with their diamond-shaped, rectangular and elliptical boxes, was torture. My world didn’t work like that and never will. Process always seemed to me to be a barrier to getting things done, often promoted by people who never contributed much in the way of new ideas.
Nevertheless, I did take some early interest in the philosophy of science. The first eye-opener was reading Bryan Magee’s book on Karl Popper, a very clear and short description of Popper’s epistemology. The lack of symmetry between proof (impossible) and falsification (sometimes possible) came as a minor revelation. Popper’s demarcation between science (falsifiable) and non-science (not falsifiable) struck me as reasonable, although I have since come to realize that the world of knowledge is a lot more fuzzy than that. Nevertheless, like the famous eroticism/pornography demarcation, you usually know it when you see it.
Falsifiability was not just an abstraction. It was useful to recognize that tests and experiments should be set up to disprove a theory rather than merely to gather supporting evidence. In geophysics, most problems do not have a unique solution and it’s no use labouring to make an elaborate model that fits the observations exactly. The real benefit of modelling is to find out what can’t work, to provide brackets on the range of possible solutions. Not everyone gets this.
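The non-uniqueness point can be illustrated with a toy inversion sketch. Everything here is invented for illustration (the `predict` function, the parameter ranges, the "observation"); it is not a real geophysical model. Many parameter combinations fit the data equally well, so the honest output is not a single best model but the region of parameter space the data falsify:

```python
import numpy as np

# Toy forward model: the "observation" depends only on the product of a
# body's density contrast and its thickness (hypothetical units), so the
# data cannot separate the two parameters.
def predict(density, thickness):
    return density * thickness

observed = 2.0
tolerance = 0.1  # assumed measurement uncertainty

# Scan a grid of candidate models.
densities = np.linspace(0.1, 4.0, 200)
thicknesses = np.linspace(0.1, 4.0, 200)
d, th = np.meshgrid(densities, thicknesses)
fits = np.abs(predict(d, th) - observed) <= tolerance

# Hundreds of models fit the data equally well: no unique solution.
print(fits.sum(), "models fit the observation")

# But the data DO rule models out: with thickness capped at 4.0, any
# density contrast below about observed/4 is falsified. That bracket is
# the useful result of the modelling.
print("smallest density that can fit:", d[fits].min())
```

The design point is Popperian: the single number we learn with confidence is a bound (what can’t work), not a best-fit model (which is one of many).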
In the real world of science, the key falsifying/corroborating experiment is often a bit of a yawn rather than a grand celebration of empiricism. Can anyone remember when they successfully applied VLBI or GPS technology to measure the relative movements of the continents, or who the scientists were who did it? I didn’t think so. The plate tectonics train had long left the station.
Even in business, there were Popper applications. For example, after drilling a successful exploration well, appraisal wells would usually be required to see if the discovery was big enough to be economic. Rather than drill two wells, you could sometimes instead drill the appraisal well first, down from the very top of the structure, in order to determine the viability of the prospect with just a single well. The idea was that you drilled the well not to prove an oil accumulation, but to disprove the presence of an economic one. The potential money savings are obvious. However, a well located on the top would have more chance of being declared a success, resulting in a press release and champagne. Consequently, first drilling a riskier well on the flank of a structure faced resistance from management and colleagues. Of course, the process people would point to their flowcharts, which showed that first you drill the discovery well and then you drill an appraisal well, which didn’t help.
My scientific career started during the late stages of the plate tectonic revolution. The time was ripe to read Thomas Kuhn’s The Structure of Scientific Revolutions. The elements of sociology he introduced made sense of the dynamics that really went on in scientific meetings, where people argued passionately and, sometimes, angrily. They would form factions and gossip about each other in the pub. Often, opposed groups would talk past each other, which was perhaps a manifestation of Kuhn’s incommensurability of competing ideas.
Although Kuhn provided some insights into how science worked among us hyper-social animals, unlike Popper’s model, there was nothing there that guided me to do research better.
Kuhn also explained how most of science was concerned with problem solving within an established paradigm, rather than the kind of revolutionary research done by a Galileo, a Darwin or an Einstein. This was obviously true, but, to an ambitious young guy, it was a bit depressing too, since it indicated that a successful scientific career would much more likely be spent replacing the grout between the stones in the wall than tearing down and building a new citadel.
Kuhn let the sociologists in to sit at the table previously occupied by philosophers and elderly scientists. This was necessary because modern science really is a social activity, but it had the side-effect, at least for innocents like me, of rendering much of the ensuing meta-discourse about science incomprehensible.
There was something unsatisfying about the philosophies of science that I had read. For one thing, they focussed too much on astronomy and physics, and didn’t really deal with what went on in messier subjects like geology and biology. Also, these philosophies felt a bit like reading the rules of football and expecting to learn how to actually play the game. That changed when I read Paul Feyerabend’s Against Method (free pdf here).
Feyerabend’s close look at history, particularly at Galileo, revealed how much the Popperian and other rationalist accounts of scientific progress were, at worst, just-so stories. There is not one method that fits all problems. Galileo advanced his case with ad hoc glossings-over of problematic areas in his ideas and he employed what Feyerabend, with his characteristic hyperbole, termed propaganda, but was really just salesmanship.
“Anything goes” is, of course, a provocation and Feyerabend says that it is not a principle but “the terrified reaction of a rationalist who takes a closer look at history”.
Feyerabend’s epistemology, such as it is, appeals to me partly because it provides cover for my own aversion to process and my untidy desk. Admittedly, it’s not much use in guiding a budding scientist in effective methods, nor does it provide any means of demarcating science from non-science. Far from providing a sociological framework for science, Feyerabend argues that any attempt to erect such a model is probably futile, given the extraordinary diversity of scientific topics.
Working scientists are often shy about expounding on how science works: I suspect that many of them know that it’s a little foolish to attempt to generalize about methods. On the other hand, Internet pundits, who may never have published a science paper in their lives, are quick to instruct actual scientists in how the process works and glibly quote Karl Popper and Richard Feynman out of context. They are like back-seat passengers who bore mechanics, engineers and drivers with details about how a car really works.
Because scientists are involved in a social process and because they share a lot of genes with chimpanzees, what actually gets said in the common rooms may shock those with delicate sensibilities who have bought in to the ideas of scientific objectivity and detachment.
Climategate, of course, provided the public with a peek inside the sausage factory. The private emails made public showed scientists occasionally to be gossipy, competitive and sometimes a little sloppy in their use of terminology among their friends and colleagues. Numerous formal investigations revealed no scientific misconduct at all. The scientists were however found guilty of behaving, in private, as if they were people.
The shock and horror expressed by some journalists and commentators, who really should have known better, reminded me of the, possibly apocryphal, tale of the Victorian art critic John Ruskin who was unable to consummate his marriage once he discovered, to his disgust, that adult women had pubic hair. This natural feminine feature was absent in the classical nudes he had studied so intensively in the museums. At least Feyerabend did us all a service in depicting scientific history, unwaxed.
Anarchy can be refreshing and liberating, but as we return to Earth, we still have to figure out how to distinguish reliable knowledge (John Ziman’s term, I think) from the rest.
How can we know what is true?
The short answer, putting on my geophysicist’s hat, is that if you want to discern the signal in the noise, stack all the traces of evidence and hope that the noise is random enough to cancel out. Consensus, in other words, is what you are left with. A longer answer follows.
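The stacking metaphor can be made concrete with a minimal sketch. The numbers here are made up for illustration: a known pulse stands in for the "signal", and each "trace" is that pulse plus independent random noise. Averaging N traces preserves the coherent signal while random noise shrinks roughly like 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "signal": a narrow pulse on a time axis.
n_samples = 500
t = np.linspace(0.0, 1.0, n_samples)
signal = np.exp(-((t - 0.5) ** 2) / 0.001)

# Each "trace" of evidence sees the same signal plus independent noise.
n_traces = 100
noise = rng.normal(scale=2.0, size=(n_traces, n_samples))
traces = signal + noise

# Stacking: average the traces. The coherent signal is preserved while
# random noise cancels, improving signal-to-noise by about sqrt(n_traces).
stack = traces.mean(axis=0)

def rms_error(estimate):
    """Root-mean-square misfit between an estimate and the true signal."""
    return np.sqrt(np.mean((estimate - signal) ** 2))

print("single trace error:", rms_error(traces[0]))
print("stacked error:     ", rms_error(stack))  # roughly 10x smaller
```

The analogy with consensus only holds, of course, to the extent that the errors in the individual lines of evidence are independent, which is exactly what the "consilience" criterion below is about.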
The question in the subtitle is ill-posed, or at least ill-defined.
Firstly, we need to define who “we” are. Some of us are scientifically trained, most are not, but we all need some method to sort good information from bad. The scope of modern science is so broad and the supporting structure so complex, that even the specialists in any field today have to take much of it on trust.
The Royal Society’s motto nullius in verba (take no one’s word for it) harks back to a time when scientists could reproduce experiments on a lab bench with relative ease, or could point their own telescopes at the sky. To be sure, some replications of observations are easier now than then: in the old days, if you wanted to check out Darwin’s claims about finches in the Galapagos, for example, you couldn’t just go and buy an air ticket to Ecuador.
It would be perverse today to withhold acceptance of the results of the Large Hadron Collider experiments until you can replicate them for yourself. The many publications of the ATLAS project have lists of hundreds of authors; in this one, for example, the contributors range from Aaboud to Zwalinski. How many of these co-authors can vouch for the conclusions of this paper? How many of them have even read it? What are the rest of us supposed to do?
Secondly, “true” is perhaps too strong a word and risks objections from pedants who will immediately list the established truths that were later found to be untrue. I prefer Ziman’s “reliable knowledge”, but I’ll happily talk about “facts” and “truth”, among friends at least.
The experts in any field don’t need any help or shortcuts. They attend conferences, gossip with peers, read the literature a little outside of their speciality and often teach broader aspects of their subject using textbooks. The consensus, mostly, is what they teach to their undergraduate students. Their research focusses on the fringes of consensus or on problems within it, à la Kuhn. Published papers that are wrong get ignored and are rarely formally rebutted. Peer-reviewed rebuttal papers take effort, rarely get cited and don’t help much with your h-index, so they tend not to get written. From the perspective of a non-specialist outsider, this is not satisfactory.
My friend Peter Jacobs has a nice lecture on Knowledge-Based Consensus, a concept which goes beyond a simple show of hands. Knowledge-Based Consensus involves:
- Consilience of evidence (independent lines of evidence pointing to the same result);
- Social calibration (agreement on basic concepts and methods);
- Social diversity (agreement among researchers coming from different backgrounds and perspectives).
Watch it, rather than have me describe it any further.
If the three criteria of Knowledge-Based Consensus are satisfied, we can have confidence that the scientific knowledge in a certain area is reliable.
It works, bitches
We may not be able to draw a nice process diagram for how a scientific idea emerges and gains credibility. It’s probably not possible to draw a bright line between science and non-science; besides, the problems in the fuzzy area are often the most interesting. We can’t generalize about the sociological interactions that take place among the diverse individuals and teams involved in tackling scientific problems of very different kinds. “Truth” is ever-elusive in theory, yet we can still get a grip on it in practice. You don’t need a map to explore new territory.
The search for some kind of general theory of what science is, and how it works, continues. Meanwhile, the scientists just get on with the job.
The tried-and-trusted fall-back of any exasperated scientist is simply to declare that, whatever the philosophers and post-modernists might say, it works.
This anecdote is almost certainly apocryphal: an Irish minister is supposed to have asked of his economic advisor: “I understand how that new policy works in practice, but can you please explain to me how it works in theory?”