
David Burbridge’s 10 questions for A. W. F. Edwards in 2006

A few years ago I watched a documentary about the rise of American-influenced rock music in Britain in the 1960s. At some point, one of the Beatles, probably Paul McCartney, or otherwise Eric Clapton, was quoted as saying that they wanted to introduce Americans to “their famous people.” Though patronizing and probably wrong, what they were talking about is that particular blues musicians who were very influential in some British circles were lingering in obscurity in the United States of America due to racial prejudice. The bigger picture is that there are brilliant people who for whatever reason are not particularly well known to the general public.

This is why I am now periodically “re-upping” interviews with scientists that we’ve done on this weblog over the past 15 years. These are people who should be more famous. But aren’t necessarily.

In 2006 David Burbridge, a contributor to this weblog and a historian of things Galtonian, interviewed the statistical geneticist A. W. F. Edwards. Edwards was one of R. A. Fisher’s last students, so he has a connection to a period of history that is passing us by.

I do want to say that his book, Foundations of Mathematical Genetics, really gave me a lot of insights when I first read it in 2005 and began to be deeply interested in pop gen. It’s dense. But short. I have also noticed that there is now a book out which is a collection of Edwards’ papers, with commentaries, Phylogenetic Inference, Selection Theory, and a History of Science. Presumably, it is like W. D. Hamilton’s Narrow Roads of Gene Land series. I wish more eminent researchers would publish these sorts of compilations near the end of their careers.

There have been no edits below (notice the British spelling). But I did add some links!

David’s interview begins after this point:

A. W. F. (Anthony) Edwards is one of Britain’s most distinguished geneticists. He studied genetics at Cambridge as one of the last students of R. A. Fisher, and like Fisher he has contributed actively to both genetics and statistics. In genetics his work includes several influential papers on the reconstruction of phylogenies, and a widely-read recent article on ‘Lewontin’s Fallacy’. In statistics he is known especially for his development and advocacy of the concept of Likelihood as a criterion for scientific inference. He has also made a notable contribution to combinatorial mathematics by finding a method of constructing Venn diagrams for any number of sets. In addition to many scientific papers, he has written four books: Likelihood (1972; expanded edition 1992); Foundations of Mathematical Genetics (1977; 2nd edition 2000); Pascal’s Arithmetical Triangle: the Story of a Mathematical Idea (1987; expanded edition 2002); and Cogwheels of the Mind: the Story of Venn Diagrams (2004). He has written extensively on the history of genetics, mathematics, and statistics, and has co-edited (with H. A. David) Annotated Readings in the History of Statistics (2001), and (with Milo Keynes and Robert Peel) A Century of Mendelism in Human Genetics (2004). He is also a champion glider pilot.

1. You were among the last students of R. A. Fisher. Can you share with us some reminiscences of him?

I first met Fisher in the summer of 1956 and had much contact with him until his death six years later. I saw him last during the Second Human Genetics Conference in Rome in 1961 and subsequently corresponded with him. I have published quite a few of my reminiscences of those times in a number of different places. I have been fortunate in having had a lot of contact with the older generation who knew him better than I did – people like Barnard, Bartlett, Finney, Yates, Race, Ruth Sanger and Bennett – and with members of his family, especially Rose, Harry and Joan (his biographer) amongst his children. Being a fellow of the same Cambridge college (Caius) as Fisher, though not at the same time, has meant daily contact with people who knew him well. But the most important thing is his science, and there everyone can get to know him through his writings, which reveal a mind of extraordinary power and vigour. That is the Fisher whom succeeding generations should learn about and admire.

2. Like Fisher you have worked in both statistics and genetics. How do you see the relationship between them, both in your own work and more generally?

In a sense I have benefitted from being an amateur in both fields so that I see no boundary between them. Though I qualified in genetics the subject almost immediately changed so radically through advances in molecular biology that most geneticists would not now regard me as one of themselves anyway. My generation thought genetics was the study of inheritance; theirs thinks it is the study of genes. As to statistics, I attended eight lectures by Henry Daniels in Cambridge but am otherwise self-taught, being hugely influenced by Fisher’s book Statistical Methods for Research Workers which he told me to buy (and then signed for me). Genetical statistics has changed fundamentally too: our problem was the paucity of data, especially for man, leading to an emphasis on elucidating correct principles of statistical inference. Modern practitioners have too much data and are engaged in a theory-free reduction of it under the neologism ‘bioinformatics’. We had to navigate by the stars; they have GPSs.

3. Much of your early work (some of it in collaboration with L. L. Cavalli-Sforza) was on methods of inferring phylogenies. How do you assess the progress in this field since the 1960s, and how have your own methods stood up to empirical tests?

All my work was in collaboration with Luca Cavalli-Sforza. It was his idea. He hired me to join his group in Pavia in Italy, not specifically to work on phylogenies but to apply the new-fangled computers to human genetics generally. The late delivery of the Olivetti computer was a blessing in disguise because it left us time to talk about what we would do with it when it came. I was initially sceptical because I knew that linkage was statistically difficult and here was Luca proposing what looked like linkage on a tree whose very shape also required estimating!

I think progress on the theoretical side has been incredibly slow, despite the best efforts of Joe Felsenstein, the leading practitioner. In a few months in 1962 and 1963 Luca and I thought up three ways of tackling the problem: least-squares on an additive tree (his), minimum evolution or parsimony (mine) and maximum-likelihood on a stochastic model (very much a joint effort). Forty-odd years on people are still arguing about the relative merits of the descendants of our methods when all along they should have been concentrating on refining the statistical approach through maximum-likelihood, which was our real contribution. Of course, from a practical point of view the computer packages have taken over in a development parallel to that in human genetics, from shaky inferences based on too little data and doubtful logic to computer algorithms trying to digest too much.

4. Your recent article on ‘Lewontin’s Fallacy’ criticises the claim that human geographical races have no biological meaning. As the article itself points out, it could have been written at any time in the last 30 years. So why did it take so long – and have you had any reactions from Lewontin or his supporters?

I can only speak for myself as to why it took me so long. Others closer to the field will have to explain why the penny did not drop earlier, but the principal cause must be the huge gap in communication that exists between anthropology, especially social anthropology, on the one hand, and the humdrum world of population and statistical genetics on the other. When someone like Lewontin bridges the gap, bearing from genetics a message which the other side wants to hear, it spreads fast – on that side. But there was no feedback. Others might have noticed Lewontin’s 1972 paper but I had stopped working in human and population genetics in 1968 on moving to Cambridge because I could not get any support (so I settled down to writing books instead). In the 1990s I began to pick up the message about only 15% of human genetic variation being between, as opposed to within, populations with its non-sequitur that classification was nigh impossible, and started asking my population-genetics colleagues where it came from. Most had not heard of it, and those that had did not know its source. I regret now that in my paper I did not acknowledge the influence of my brother John, Professor of Genetics in Oxford, because he was independently worrying over the question, inventing the phrase ‘the death of phylogeny’ which spurred me on.

Eventually the argument turned up unchallenged in Nature and the New Scientist and I was able to locate its origin. I only started writing about it after lunch one day in Caius during which I had tried to explain the fallacy across the table to a chemist, a physicist, a physiologist and an experimental psychologist – all Fellows of the Royal Society – and found myself faltering. I like to write to clear my mind. Then I met Adam Wilkins, the editor of BioEssays, and he urged me to work my notes up into a paper.

I have had no adverse reaction to it at all, but plenty of plaudits from geneticists, many of whom told me that they too had been perplexed. Perhaps the communication gap is still too large, or just possibly the point has been taken. After all, Fisher made it in 1925 in Statistical Methods, which was written for biologists, so it is hardly new.

5. You have written several articles about Fisher’s Fundamental Theorem of Natural Selection. Following a groundbreaking reinterpretation by George Price in the early 1970s, it is now generally accepted that the theorem as intended by Fisher is valid, but some biologists would still question its practical use or importance. Can you explain in non-technical terms the meaning of the theorem, how the correct interpretation differs from earlier misunderstandings of it, and your own view on its biological importance?

Oh, it’s very simple. You must first recall the precise name of Fisher’s book in which it is the centrepiece: The Genetical Theory of Natural Selection. He is studying the mechanisms of natural selection from the point of view of populations regarded as aggregates of genes. Of course he knows, and stresses, that this is not the whole story. But to him selection’s defining effect is to change gene frequencies. He sees that this will only happen if there is variability in the survival rates of different genes.

Animal breeders promote artificial selection by imposing different ‘fitnesses’ on their stock according to desirability, breeding from some and not from others. They thus raise the mean value in the population of the character desired. Fisher saw that this process implicitly relies on a correlation between the character and fitness, so that progress will depend both on the magnitude of this correlation and the extent to which the character is genetically determined. What happens, he then asked, if we designate fitness itself as the character, making the correlation perfect? The answer is that the mean fitness of the genes will increase by an amount that depends on the extent to which fitness is determined by them. This is the fundamental theorem (in a modern paraphrase): ‘The rate of increase in the mean fitness ascribable to natural selection acting through changes in gene frequencies is equal to the additive genetic variance in fitness’.

The theorem does not involve the mean genotypic fitness – that is, the weighted mean of the fitnesses of the genotypes – which is where most interpreters of it went wrong. Fisher’s repeated denials that his theorem referred to the mean genotypic fitness, itself immortalised in Sewall Wright’s ‘adaptive landscapes’, went unheeded. In 1941 Fisher even published an example in which gene frequencies were changed under natural selection but the mean genotypic fitness stayed constant. Nobody noticed.

The brilliance of the fundamental theorem is not merely that it expresses the central dogma of natural selection – the connection between genetic variability and selective change – but that it does so exactly. Fisher discovered what the rate of change was proportional to: not to the total variance in fitness of the genotypes but only to that part of it found by fitting a weighted linear regression to the genotypic fitnesses. This is the part accounted for by the regression itself, the so-called additive genetic variance. Animal breeders know it as the variance of the breeding values of the genotypes. The fundamental theorem disregards the way the genes are distributed through the population, which will depend on the amount of heterosis in fitness, the extent of assortative mating, and similar possibly transient effects. What matters to it are the changes to the mean fitness brought about by changing gene frequencies.
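[For readers who want to see the arithmetic, here is a minimal numerical sketch of my own, not Edwards’, for the single-locus, two-allele case under random mating; the fitness values and function name are invented for illustration. The average effect `alpha` below is the slope of the weighted linear regression Edwards describes, the additive genetic variance is 2pq·alpha², and the partial change in mean fitness from the gene-frequency change alone matches V_A/W̄ to first order.]

```python
# A sketch of the fundamental theorem at a single locus with two alleles
# (A and a) under random mating. Fitness values are invented for illustration.

def fundamental_theorem_demo(p, w_AA, w_Aa, w_aa):
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean fitness
    # Average effect of substituting A for a: the slope of the weighted
    # linear regression of genotypic fitness on the number of A alleles.
    alpha = p*(w_AA - w_Aa) + q*(w_Aa - w_aa)
    V_A = 2*p*q*alpha**2                       # additive genetic variance
    dp = p*q*alpha / w_bar                     # change in frequency of A

    # Mean fitness at the NEW gene frequencies but the OLD genotypic
    # fitnesses: the partial change 'ascribable to natural selection
    # acting through changes in gene frequencies'.
    p2 = p + dp
    q2 = 1.0 - p2
    w_bar_new = p2*p2*w_AA + 2*p2*q2*w_Aa + q2*q2*w_aa

    print(f"partial change in mean fitness: {w_bar_new - w_bar:.6f}")
    print(f"V_A / mean fitness:             {V_A / w_bar:.6f}")
    # The two agree to first order in the strength of selection.

fundamental_theorem_demo(p=0.3, w_AA=1.10, w_Aa=1.05, w_aa=1.00)
```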

This, then, is the theorem whose ‘practical use or importance’ ‘some biologists would still question’. Let them ask the animal breeders if it is any ‘use’, and let them ask themselves whether they think Darwin’s theory of evolution by natural selection is of any ‘importance’. If they do, then the fundamental theorem should help them to a deeper, Mendelian, understanding of it. If, however, they hanker after a theory that can make evolutionary predictions, like Wright’s adaptive landscapes were thought to do at one time, they are crying for the moon. Possession of the fundamental theorem will no more enable you to predict the flow of evolution than possession of Newton’s law of gravitation will enable you to predict the time of high tide at London Bridge.

It should not be forgotten, however, that shorn of its genetical complexities the theorem does have predictive power, just as the law of gravitation does when applied to the celestial movements that underlie the tides. ‘In a subdivided population the rate of change of the overall growth-rate is proportional to the variance in growth rates’. The ‘populations’ could be economic sectors, for example, or even one’s own savings accounts.
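[A quick illustration of this stripped-down version, again my own sketch with invented numbers: sub-populations grow exponentially at fixed rates, and the size-weighted mean growth rate rises at a rate equal to the size-weighted variance of the rates.]

```python
import numpy as np

# Sub-populations (economic sectors, or savings accounts) grow exponentially
# at fixed rates. The size-weighted mean growth rate then rises at a rate
# equal to the size-weighted variance of the rates.

r = np.array([0.01, 0.03, 0.05])     # growth rates of the sub-populations
n = np.array([100.0, 100.0, 100.0])  # initial sizes
dt = 1e-4                            # a short time step

w = n / n.sum()                      # size weights
mean_r = (w * r).sum()
var_r = (w * (r - mean_r)**2).sum()

n_next = n * np.exp(r * dt)          # grow each sub-population
w_next = n_next / n_next.sum()
mean_r_next = (w_next * r).sum()

print(f"observed d(mean r)/dt: {(mean_r_next - mean_r)/dt:.8f}")
print(f"variance of r:         {var_r:.8f}")
```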

6. Your career since the 1950s spans the period in which computers, and off-the-shelf programs, have become widely available. Has this been an unmixed blessing, and do you think the development of statistics or genetics would have been very different if computers had been available in, say, 1900?

A mixed blessing of course, because the existence of programs hinders the development of the underlying theory. This is particularly true in statistics where, despite assertions to the contrary by Bayesians, the underlying theory is still a matter for discussion. The phenomenon can be seen in the field of phylogenetic trees, where programs based on different methods proliferate.

1900 is a peculiarly well-chosen date on which to hang the question. Not only was it the year in which Mendel’s results became widely known but it was also the year of the publication of the second edition of Karl Pearson’s The Grammar of Science, which included chapters on biological science for the first time. The Grammar of Science was hugely influential in its day, proclaiming that the function of science was ‘not to explain, but to describe by conceptual shorthand our perceptual experience’. ‘The man who classifies facts of any kind whatever, who sees their mutual relation and describes their sequences, is applying the scientific method’. The computer implementation of this sterile philosophy would have had a devastating effect, particularly on the development of statistical theory and the acceptance of Mendelism. All Pearson’s formidable energy would have been devoted to amassing vast quantities of information to be sifted for correlations. William Bateson’s 1894 six-hundred-page Materials for the Study of Variation treated with especial regard to Discontinuity in the Origin of Species would have been digitally scanned and computer programmers urged to uncover its secrets. It doesn’t bear thinking about!

7. In statistics you are especially known for developing and advocating the concept of Likelihood and its use in scientific inference. Can you explain how Likelihood differs from probability, and why Likelihood methods are useful in evaluating hypotheses?

Likelihood compares statistical hypotheses; it has nothing to say about a hypothesis on its own, like a test of significance does. Imagine two statistical hypotheses, each of which predicts the probabilities of all the possible outcomes of an experiment – which need be no more complex than tossing a biassed coin a number of times and counting the heads. The experiment is performed, the heads counted. Given this count, was the probability of heads p1 (the first hypothesis) or p2 (the second hypothesis)?

Now imagine doing the experiment lots of times assuming the first, and then the second, hypothesis. Would you not prefer the hypothesis that had the shorter expected waiting time until the exact number of heads observed turned up? If so, you have just chosen the one with the greater likelihood. The likelihood of a hypothesis is proportional to the probability of the data given the hypothesis. Meaningless for a hypothesis by itself because of the undefined constant of proportionality, with two hypotheses to be compared on the same data this constant is irrelevant, and the ratio of their likelihoods (or the difference in their log-likelihoods) becomes a measure of the support for one hypothesis versus the other.

Likelihoods therefore derive from probabilities, but unlike the latter are not additive. Whereas you can sum the probabilities of two possible outcomes of an experiment to form the probability of ‘either one or the other’, you cannot do the same for the likelihood of two hypotheses; ‘either one hypothesis or the other’ is not in itself a hypothesis enabling the probabilities of outcomes to be computed, so no likelihood for it is defined. But you can graph the likelihood as a function of p and pay special attention to its maximum, the maximum-likelihood estimate of the probability of heads.
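[To make the coin-tossing comparison concrete, here is a small Python sketch of my own; the counts and the two hypothesised values of P(heads) are invented. It computes the likelihood ratio, the ‘waiting time’ for each hypothesis, and the maximum-likelihood estimate.]

```python
from math import comb, log

# The coin-tossing comparison above: which hypothesised P(heads) is better
# supported by an observed count? Numbers are invented for illustration.
n, heads = 100, 60

def prob_of_data(p):
    """Probability of observing exactly `heads` heads in n tosses."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

p1, p2 = 0.5, 0.7
L1, L2 = prob_of_data(p1), prob_of_data(p2)

# Likelihoods are proportional to these probabilities; in the ratio the
# undefined constant of proportionality cancels out.
print(f"likelihood ratio L(p1)/L(p2): {L1/L2:.3f}")
print(f"log-likelihood difference:    {log(L1) - log(L2):.3f}")

# The 'waiting time' argument: repeating the experiment, the expected number
# of runs before exactly 60 heads turns up is 1/P(data | hypothesis), so the
# hypothesis with the shorter wait is the one with the greater likelihood.
print(f"expected waits: {1/L1:.0f} experiments under p1, {1/L2:.0f} under p2")

# The maximum-likelihood estimate is the value of p maximising the
# likelihood function; for the binomial it is the observed proportion.
print(f"maximum-likelihood estimate:  {heads/n}")
```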

The concept of the likelihood function is fundamental to all approaches to statistical inference, whether Bayesian, Neyman-Pearson, or Fisherian. Not everyone agrees that it is meaningful standing alone by itself, but I (and others before me) believe it is. Doubters can always fall back on the above ‘how long to wait’ argument, which I think was due to David Sprott.

8. You have written extensively on the history of genetics, statistics, and mathematics. Apart from the intrinsic interest of historical studies, how important do you think a knowledge of the history of science is for practising scientists?

I find it essential, and cannot imagine doing science without it. Much of what counts as science nowadays is rather theory-free. We don’t really have a word for it. Sequencing the human genome, for example, is a marvellous achievement relying on technical advances of great ingenuity but it did not require historical understanding. It differs intellectually from, say, the associated activity of trying to estimate linkage values between gene loci. The history of the latter, on which I have written recently, is an essential part of the study of the problem, and much modern work suffers from its neglect.

Celebrating the centenary of the publication of The Origin of Species in 1959, Fisher said: ‘More attention to the History of Science is needed, as much by scientists as by historians, and especially by biologists, and this should mean a deliberate attempt to understand the thoughts of the great masters of the past, to see in what circumstances or intellectual milieu their ideas were formed, where they took the wrong turning or stopped short on the right track.’
I agree.

9. R. A. Fisher was a keen eugenist. What are your own views on the role (if any) of eugenics in the modern world?

Fisher’s world was so different from ours, in three ways in particular. Then (say the period between the wars) nation-states were much more independent of each other so that it was possible to discuss population matters for Britain in relative isolation; secondly, it was a time of concern about the possibility of a declining home population; and thirdly many scientists were in the first flush of enthusiasm for the application of Mendelian principles – so recently elucidated – to man. None of this is true today.

For myself, though I was once a grateful holder of a Darwin Research Fellowship of the Eugenics Society (now the Galton Institute), since boyhood I have been more concerned about the quantity of people on earth rather than their quality. In the early 1960s I was a founder-member of a body called, I think, the Conservation Society, which does not seem to exist today. Its main platform was that too large a population would be unsustainable. At the time there was much discussion about over-population which was seen as one of the greatest dangers facing mankind. Interestingly, the worse the problem gets, the less it is discussed. Yet the mounting dangers we face, such as the possibility of global warming, are all exacerbated by too high a world population, given its enthusiasm for motor-cars, aeroplanes, and environmentally-damaging activity generally. It seems that people fear the charge of racism if they comment on population growth – they intuitively understand Fisher’s fundamental theorem.

10. Like yourself, your brother, J. H. Edwards, is also a distinguished geneticist. Nature, nurture, or sibling rivalry?

Well, certainly not sibling rivalry. It is true that we have been sufficiently alike at some stages of our lives to have been mistaken for each other. At the Rome Conference of Human Genetics in 1961 we were in a lift with the Swedish geneticist Jan Lindsten when he engagingly introduced us to another participant as ‘the two most confused brothers in genetics’.

In fact I am 7 1/2 years younger than John, and due to mother’s illness, father’s war service, and wartime privations generally, I hardly encountered him until the end of the war when I was ten. We developed boyhood enthusiasms for science quite independently, he for biology, me for astronomy. But there was a common factor in our education from thirteen to eighteen. We both attended Uppingham School, though of course not at the same time, and were exceptionally well-taught in science and mathematics, in some cases by the same teachers. I cannot stress this influence too strongly. Since I only went to Uppingham because my elder brother did, is that nature or nurture?

Subsequently John’s main influence was when I was learning about likelihood (see the preface to my book Likelihood). He, being medically qualified, keeps me straight on medical matters and I try to keep him straight on things statistical. I deliberately stayed off linkage theory so as not to get too close to his interests. John was more influenced by Lancelot Hogben and J. B. S. Haldane than I was. There is a wonderful letter from Fisher to R. R. Race in 1960 in which he refers to me as ‘my Edwards from Cambridge’ and to John as ‘only one of Hogben’s [pupils]’, so at least Fisher got us straight.

And John introduced me to gliding. Though not exactly a ‘champion’, to use your word, I have enjoyed fifty years gliding and hope for a few more yet.

3 thoughts on “David Burbridge’s 10 questions for A. W. F. Edwards in 2006”

  1. … probably Paul McCartney, or otherwise Eric Clapton

    Yeah, I confuse those 2 all the time too, just like AWF & JH Edwards.

  2. I agree with Edwards that the proliferation of data and computing power has led to a neglect of theory. It has only gotten worse since 2006. I have a suspicion that reviewers have not really read theory that I have developed in support of my methods. I have had reviewers ask me to perform simulations that amount to computing that 1+1=2, to demonstrate computationally what I have already demonstrated theoretically. Simulations are not a substitute for theory, especially when the simulations have been chosen to support pre-established conclusions of the authors. Theory can give you a much richer understanding of why a method works and in which scenarios it fails.
