Sunday, June 29, 2014

The Inverse Problem. Sorry, we don’t have experimental evidence for quantum gravity.

The BICEP collaboration recently reported the first measurements of CMB polarization due to relic gravitational waves. It is presently unclear whether their result will hold up and eventually be confirmed by other experimental groups, or whether they have screwed up their data analysis and there is no signal after all. Ron Cowen at Nature News carefully informs the reader that there is “No evidence for or against gravitational waves.”

I’m not an astrophysicist and don’t have much to say about the measurement, but I have something to say about what this measurement means for quantum gravity. Or rather, what it doesn’t mean.

I keep coming across claims that BICEP is the first definitive experimental evidence that gravity must be quantized. Peter Woit, for example, let us know that Andrew Strominger and Juan Maldacena cheerfully explain that quantum gravity is now an experimental subject, and Lawrence Krauss recently shared his “Viewpoint” in which he also writes that the BICEP results imply gravity must be quantized:

“Research by Frank Wilczek and myself [Lawrence Krauss], based on a dimensional analysis argument, suggests that, independently of the method used to calculate the gravitational wave spectrum, a quantum gravitational origin is required. If so, the BICEP2 result then implies that gravity is ultimately a quantum theory, a result of fundamental importance for physics.”
We previously discussed the argument by Krauss and Wilczek here. In a nutshell, the problem is that one cannot conclude anything from nothing, and no conclusion is ever independent of its assumptions.

The easiest way to judge these claims is to ask yourself: What would happen if the BICEP result does not hold up and other experiments show that the relic gravitational wave background is not where it is expected to be?

Let me tell you: Nobody working on quantum gravity would seriously take this to mean gravity isn’t quantized. Instead, they’d stumble over each other trying to explain just how physics in the early universe is modified so as to not leave a relic background measurable today. And I am very sure they’d come up with something quickly, because we have very little knowledge about physics in such extreme conditions, thus much freedom to twiddle with the theory.

The difference between the two situations, the relic background being confirmed or not confirmed, is that almost everybody expects gravity to be quantized, so right now everything goes as expected and nobody rushes to come up with a way to produce the observed spectrum without quantizing gravity. The difference between the two situations is thus one of confirmation bias.

What asking this question tells you, then, is that there are assumptions going into the conclusion other than the perturbative quantization of gravity, assumptions that will quickly be thrown out if the spectrum doesn’t show up as expected. But the existence of these additional assumptions also tells you that the claim that we have evidence for quantum gravity is, if not wrong, then at least very sloppy.

What we know is this: If gravity is perturbatively quantized and nothing else happens (that’s the extra assumption) then we get a relic gravitational wave spectrum consistent with the BICEP measurement. This statement is equivalent to the statement that no relic gravitational wave spectrum in the BICEP range implies no perturbative quantization of gravity as long as nothing else happens. The conclusion that Krauss, Wilczek, Strominger, Maldacena and others would like to draw is however that the measurement of the gravitational wave spectrum implies that gravity must be quantized, leaving aside all other assumptions and possibly existing alternatives. This statement is not logically equivalent to the former. This non-equivalence is sometimes referred to as “the inverse problem”.
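To make the non-equivalence explicit, here is the logical structure spelled out in symbols (my shorthand, not from any of the quoted papers):

```latex
% Q: gravity is perturbatively quantized
% N: nothing else happens (the extra assumption)
% S: a relic spectrum consistent with the BICEP measurement exists
(Q \wedge N) \Rightarrow S
% The contrapositive, which IS logically equivalent:
\neg S \Rightarrow (\neg Q \vee \neg N)
% The claimed conclusion, which is the converse and NOT equivalent:
S \Rightarrow Q
```

The converse only follows if one has ruled out every way to get S without Q, which is exactly the inverse problem discussed below.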

The inverse problem is finding the theory from the measurements, the inverse of calculating the data from the theory. Strictly speaking it is impossible to pin down the theory from the measurements, since this would imply ruling out all alternative options but one, and there might always be alternative options - unknown unknowns - that we just did not think of. In practice, then, solving the inverse problem means ruling out all known alternatives. I don’t know of any alternative that has been ruled out.

The so-far best attempt at ruling out classical gravity is this paper by Ashoorioon, Dev, and Mazumdar. They show essentially that it’s the zero-point fluctuations of the quantized metric that seed the relic gravitational waves. Since a classical field doesn’t have these zero-point fluctuations, this seed is missing. Using any other known matter field with standard coupling as a seed would give an amplitude that is too small; this part of the argument is basically the same as Krauss and Wilczek’s.

There is nothing wrong with this conclusion, except for the unspoken words that “of course” nobody expects any other source for the seeds of the fluctuations. But you have practice now, so try turning the argument around: If there was no gravitational wave background at that amplitude, nobody would argue that gravity must be classical, but rather that there must be some non-standard coupling or some other seed, ie some other assumption that is not fulfilled. Probably some science journalist would call it indirect evidence for physics beyond the standard model! Neither the argument by Ashoorioon et al nor that by Krauss and Wilczek, for example, has anything to say about a phase transition from a nongeometric phase that might have left some seeds, or some other non-perturbative effect.

There are more things in heaven and earth, Horatio, than annihilation and creation operators for spin-two fields.

The argument by Krauss and Wilczek uses only dimensional analysis. The strength of their argument is its generality, but that’s also its weakness. You could argue on the same grounds, for example, that the electron’s mass is evidence for quantum electrodynamics because you can’t write down a mass term for a classical field without an hbar in it. That is technically correct, but it’s also uninsightful because it doesn’t tell us anything about what we actually mean by quantization, eg the commutation relations between the field and its conjugate. It’s similar with Krauss and Wilczek’s argument. They show that, given there’s nothing new happening, you need an hbar to make the dimensions work out. This is correct but in and by itself doesn’t tell you what the hbar does to gravity. The argument by Ashoorioon et al is thus more concrete, but on the flip side less widely applicable.
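The electron-mass analogy can be made concrete with the standard Klein–Gordon equation, which I write out here merely to illustrate where the hbar sits:

```latex
% Klein--Gordon equation for a classical field \phi with mass parameter m:
\left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2
       + \frac{m^2 c^2}{\hbar^2} \right) \phi = 0
% The mass enters only through the inverse Compton wavelength mc/\hbar,
% so writing down the mass term at all requires an \hbar -- yet this alone
% says nothing about commutation relations or what quantization actually
% does to the field.
```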

Don’t get me wrong there. I have no reason to doubt that perturbatively quantized gravity is the right description at weak coupling, and personally I wouldn’t want to waste my time on a theory that leaves gravity unquantized. But the data’s relevance for quantum gravity is presently being oversold. If the BICEP result vanishes you’d see many people who make their living from quantum gravity backpedal very quickly.

Monday, June 23, 2014

What I do... [Video]

What I do when I don't do what I don't do. I'm the short one. The tall one is our soon-to-be-ex director Lárus Thorlacius, also known as the better half of black hole complementarity. Don't worry, I'm not singing.

Sunday, June 22, 2014

Book review: “The Island of Knowledge” by Marcelo Gleiser

The Island of Knowledge: The Limits of Science and the Search for Meaning
By Marcelo Gleiser
Basic Books (June 3, 2014)

In May 2010, I attended a workshop at Perimeter Institute on “The Laws of Nature: Their Nature and Knowability,” one of the rare events that successfully mixed physicists with philosophers. My main recollection of this workshop is spending half of it being sick in the women’s restroom, which probably had less to do with the philosophers and more with me being 4 weeks pregnant. Good thing that I’m writing a blog to remind me what else happened at this workshop, for example Marcelo Gleiser’s talk “What can we know of the world?” about which I wrote back then. Audio and Video here.

The theme of Gleiser’s 2010 talk – the growth of human knowledge through scientific inquiry and the limits of that growth – is the content of his most recent book “The Island of Knowledge”. He acknowledges having had the idea for this book at the 2010 PI workshop, and I can see why. Back then my thought about Gleiser’s talk was that it was all rather obvious, so obvious I wasn’t sure why he was invited to give a talk to begin with. Also, why was I not invited to give a talk, mumble grumble grr, bathroom break.

Surprisingly though, many people in attendance had arguments to pick with Gleiser following his elaborations. That gave me something to think about: apparently not even practicing scientists all agree on the purpose of scientific inquiry, certainly not philosophers, and the questions of what we do science for, and what we can ultimately expect to know, make a good topic for a popular-science book.

Gleiser’s point of view about science is pragmatic and process oriented, and I agree with pretty much everything in his book, so I am clearly biased to like it. Science is a way to construct models about the world and to describe our observations. In that process, we collect knowledge about Nature but this knowledge is necessarily limited. It is limited by the precision to which we can measure, and quite possibly limited also by the laws of nature themselves because they may fundamentally prevent us from expanding our knowledge.

The limits that Gleiser discusses in “The Island of Knowledge” are the limits set by the speed of light, by quantum uncertainty, by Gödel’s incompleteness theorems, and by the finite capacity of the human brain that ultimately limits what we can possibly understand. Expanding on these topics, Gleiser guides the reader through the historical development of the scientific method, the mathematical description of nature, cosmology, relativity and quantum mechanics. He makes detours through alchemy and chemistry and ends with his thoughts on artificial intelligence and the possibility that we live in a computer simulation. Along the way, he discusses the multiverse and the quest for a theory of everything (the latter apparently the topic of his previous book “A Tear at the Edge of Creation”, which I haven’t read).

Since we can never know what we don’t know, we will never know whether our models are indeed complete and as good as can be, which is why the term “Theory of Everything”, when taken literally, is unscientific in itself. We can never know whether a theory is indeed a theory of everything. Gleiser is skeptical of the merits of the multiverse, which “stretch[es] the notion of testability in physics to the breaking point” and which “strictly speaking is untestable”, though he explains how certain aspects of the multiverse (bubble collisions, see my recent post on this) may result in observable consequences.

Gleiser on the one hand is very much the pragmatic and practical scientist, but he does not discard philosophy as useless either, rather he argues that scientists have to be more careful about the philosophical implications of their arguments:
“[A]s our understanding of the cosmos has advanced during the twentieth century and into the twenty-first, scientists – at least those with cosmological and foundational interests – have been forced to confront questions of metaphysical importance that threaten to compromise the well-fortified wall between science and philosophy. Unfortunately, this crossover has been done for the most part with carelessness and conceptual impunity, leading to much confusion and misapprehension.”

Unfortunately, Gleiser isn’t always very careful himself. While he first argues that there’s no such thing as scientific truth because “The word “true” doesn’t have much meaning if we can’t ever know what is,” he later uses expressions like “the laws of Nature aim at universality, at uncovering behaviors that are true”. And his take on Platonism seems somewhat superficial. Most readers will probably agree that “mathematics is always an approximation to reality and never reality as it is” and that Platonism is “a romantic belief system that has little to do with reality”. Alas we do not actually know that this is true. There is nothing in our observations that would contradict such an interpretation, and for all I know we can never know whether it’s true or not, so I leave it to Tegmark to waste his time on this.

Gleiser writes very well. He introduces the necessary concepts along the way, and is remarkably accurate while using a minimum of technical details. Some anecdotes from his own research and personal life are nicely integrated with the narrative, and he has a knack for lyrical imagery, which he uses sparingly but with good timing to make his points.

The book reads much better than John Barrow’s 1999 book “Impossibility: The Limits of Science and the Science of Limits”, which took on a similar theme. Barrow’s book is more complete in that it also covers economical and biological limits, and more of what scientists presently believe is impossible and why, for example time travel, complexity, and the possible future developments of science, which Gleiser doesn’t touch upon. But Barrow’s book also suffers from being stuffed with all these topics. Gleiser’s book aims less at completeness; he clearly leaves out many aspects of “The Limits of Science and the Search for Meaning” that the book subtitle promises, but in his selectiveness Gleiser gets his points across much better. Along the way, the reader learns quite a bit about cosmology, relativity and quantum mechanics, no prior knowledge required.

I would recommend this book to anybody who wants to know where the current boundaries of the “Island of Knowledge” are, seen from the shore of theoretical physics.

[Disclaimer: Free review copy]

Sunday, June 15, 2014

Evolving dimensions, now vanishing

Vanishing dimensions.
Technical sketch.
Source: arXiv:1406.2696 [gr-qc]

Some years ago, we discussed the “Evolving Dimensions”, a new concept in the area of physics beyond the standard model. The idea, put forward by Anchordoqui et al in 2010, is to make the dimensionality of space-time scale-dependent so that at high energies (small distances) there is only one spatial dimension and at small energies (large distances) the dimension is four, or possibly even higher. In between – in the energy regime that we deal with in everyday life and most of our experiments too – one finds the normal three spatial dimensions.

The hope is that these evolving dimensions address the problem of quantizing gravity, since gravity in lower dimensions is easier to handle, and possibly the cosmological constant problem, since it is a long-distance modification that becomes relevant at low energies.

One of the motivations for the evolving dimensions is the finding that the spectral dimension decreases at high energies in various approaches to quantum gravity. Note however that the evolving dimensions deal with the actual space-time dimension, not the spectral dimension. This immediately brings up a problem that I talked about to Dejan Stojkovic, one of the authors of the original proposal, several times, the issue of Lorentz-invariance. The transition between different numbers of dimensions is conjectured to happen at certain energies: how is that statement made Lorentz-invariant?

The first time I heard about the evolving dimensions was in a talk by Greg Landsberg at our 2010 conference on Experimental Search for Quantum Gravity. I was impressed by this talk, impressed because he was discussing predictions of a model that didn’t exist. Instead of a model for the spacetime of the evolving dimensions, he had an image of yarn. The yarn, you see, is one-dimensional, but you can knit it into two-dimensional sheets, which you can then form into a three-dimensional ball, so in some sense the dimension of the yarn can evolve depending on how closely you look. It’s a nice image. It is also obviously not Lorentz-invariant. I was impressed by this talk because I’d never have the courage to give a talk based on a yarn image.

It was the early days of this model, a nice idea indeed, and I was curious to see how they would construct their space-time and how it would fare with Lorentz-invariance.

Well, they never constructed a space-time model. Greg seems not to have continued working on this, but Dejan is still on the topic. A recent paper with Niayesh Afshordi from Perimeter Institute still has the yarn in it. The evolving dimensions are now called vanishing dimensions, not sure why. Dejan also wrote a review on the topic, which appeared on the arxiv last week. More yarn in that.

In one of my conversations with Dejan I mentioned that the Causal Set approach makes use of a discrete yet Lorentz-invariant sprinkling, and I was wondering out loud if one could employ this sprinkling to obtain Lorentz-invariant yarn. I thought about this for a bit but came to the conclusion that it can’t be done.

The Causal Set sprinkling is a random distribution of points in Minkowski space. It can be explicitly constructed and shown to be Lorentz-invariant on the average. It looks like this:

Causal Set Sprinkling, Lorentz-invariant on the average. Top left: original sprinkling. Top right: zoom. Bottom left: Boost (note change in scale). Bottom right: zoom to same scale as top right. The points in the top right and bottom right images are randomly distributed in the same way. Image credits: David Rideout. [Source]
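For concreteness, here is a minimal sketch of such a Poisson sprinkling in 1+1 dimensions (the function names are mine, not standard Causal Set code): one draws a Poisson-distributed number of points with mean equal to the density times the spacetime volume, then places them uniformly. Since both the volume and the notion of a point are Lorentz-invariant, a boosted sprinkling is statistically indistinguishable from an unboosted one.

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson-distributed integer with mean lam (Knuth's method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sprinkle(rng, density, t_size, x_size):
    """Poisson sprinkling into a t_size-by-x_size region of 1+1 dimensional
    Minkowski space. The expected number of points depends only on the
    spacetime volume (here: an area), which is Lorentz-invariant."""
    n = poisson_sample(rng, density * t_size * x_size)
    return [(rng.uniform(0, t_size), rng.uniform(0, x_size)) for _ in range(n)]

def boost(points, v):
    """Apply a boost with velocity v (in units of c) to a list of (t, x) points."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return [(gamma * (t - v * x), gamma * (x - v * t)) for (t, x) in points]

rng = random.Random(42)
pts = sprinkle(rng, density=50.0, t_size=2.0, x_size=2.0)
boosted = boost(pts, v=0.6)

# The boost preserves the Minkowski interval t^2 - x^2 of every point and the
# number of points; only the coordinate description changes.
for (t, x), (tb, xb) in zip(pts, boosted):
    assert abs((t * t - x * x) - (tb * tb - xb * xb)) < 1e-9
```

Boosting moves individual points out of the original coordinate window, but the point statistics in any region of given volume are unchanged, which is the sense in which the sprinkling is Lorentz-invariant on the average.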

The reason this discreteness is compatible with Lorentz-invariance is that the sprinkling makes use only of four-volumes and of points, both of which are Lorentz-invariant, as opposed to Lorentz-covariant. The former doesn’t change under boosts, the latter changes in a well-defined way. Causal Sets, as the name says, are sets. They are collections of points. They are not, I emphasize, graphs – the points are not connected. The set has an order relation (the causal order), but a priori there are no links between the points. You can construct paths on the sets, they are called “chains”, but these paths make use of an additional initial condition (eg an initial momentum) to find a nearest neighbor.

The reason that looking for the nearest neighbor doesn’t make much physical sense is that the distance to all points on the lightcone is zero. The nearest neighbor to any point is almost certainly (in the mathematical sense) infinitely far away and on the lightcone. You can use these neighbors to make the sprinkling into a graph. But now you have infinitely many links that are infinitely long, and the whole thing becomes space-filling. That is Lorentz-invariant of course. It is also, in no sensible meaning of the word, still one-dimensional on small scales. [Aside: I suspect that the space you get in this way is not locally identical to R^4, though I can’t quite put my finger on it, it doesn’t seem dense enough if that makes any sense? Physically this doesn’t make any difference though.]
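The statement about lightcone neighbors is just the Minkowski line element at work:

```latex
% Minkowski interval between a point p and a sprinkled point q:
s^2(p,q) = (t_p - t_q)^2 - |\vec{x}_p - \vec{x}_q|^2
% On the lightcone of p the interval vanishes, s^2 = 0, no matter how large
% the coordinate separation. In an infinite sprinkling there are almost
% surely points with s^2 arbitrarily close to zero at arbitrarily large
% coordinate distance -- these are the "nearest neighbors" of p.
```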

So it pains me somewhat that the recent paper of Dejan and Niayesh tries to use the Causal Set sprinkling to save Lorentz-invariance:

“One may also interpret these instantaneous string intersections as a causal set sprinkling of space-time [...] suggesting a potential connection between causal set and string theory approaches to quantum gravity.”

This interpretation is almost certainly wrong. In fact, in the argument that their string-based picture is Lorentz-invariant they write:
“Therefore, on scales much bigger than the inverse density of the string network, but much smaller than the size of the system, we expect the Lorentz-invariant (3+1)-dimensional action to emerge.”
Just that a Lorentz-invariance which emerges only above a certain system size is not Lorentz-invariance.

I must appear quite grumpy going about and picking on what is admittedly an interesting and very creative idea. I am annoyed because in my recent papers on space-time defects, I spent a considerable amount of time trying to figure out how to use the Causal Set sprinkling for something (the defects) that is not a point. The only way to make this work is to use additional information for a covariant (but not invariant) reference frame, as one does with the chains.

Needless to say, in none of the papers on the topic of evolving, vanishing, dimensions does one find an actual construction of the conjectured Lorentz-invariant random lattice. In the review, the explanation reads as follows: “One of the ways to evade strong Lorentz invariance violations is to have a random lattice (as in Fig 5), where Lorentz-invariance violations would be stochastic and would average to zero...” Here is Fig 5:

Fig 5 from arXiv:1406.2696 [gr-qc]

Unfortunately, the lattice in this proof by sketch is obviously not Lorentz-invariant – the spacings are all about the same size, which constitutes a preferred scale.

The recent paper of Dejan Stojkovic and Niayesh Afshordi attempts to construct a model for the space-time by giving the dimensions a temperature-dependent mass, so that, as temperatures drop, additional dimensions open up. This raises the question though: temperature of what? Such an approach might make sense in the early universe, or when there is some plasma around, but a mean field approximation clearly does not make sense for the scattering of two asymptotically free states, which is one of the cases that the authors quote as a prediction: a highly energetic collision is supposed to take place in only two spatial dimensions, leading to a planar alignment.

Now, don’t get me wrong, I think that it is possible to make this scenario Lorentz-invariant, but not by appealing to a non-existent Lorentz-invariant random lattice. Instead, it should be possible to embed this idea into an effective field theory approach, some extension of asymptotically safe gravity, in which the relevant scale that is being tested then depends on the type of interaction. I do not know, though, in which sense these dimensions could then still be interpreted as space-time dimensions.

In any case, my summary of the recent papers is that, unsurprisingly, the issue with Lorentz-invariance has not been solved. I think the literature would really benefit from a proper no-go theorem proving what I have argued above, that there exist no random lattices that are Lorentz-invariant on the average. Or otherwise, show me a concrete example.

Bottom line: A set is not a graph. I claim that random graphs that are Lorentz-invariant on the average, and are not space-filling, don’t exist in (infinitely extended) Minkowski space. I challenge you to prove me wrong.

Monday, June 09, 2014

Is Science the only Way of Knowing?

Fast track to wisdom: It isn’t.

One can take scientism too far. No, science is not “true” whether or not you believe in it, and science is not the only way of knowing, in no sensible definition of the words.

Unfortunately, the phrase “Science is not the only way of knowing” has usually been thrown at me, triumphantly, by various people in defense of their belief in omnipotent things or other superstitions. And I will admit that my reflex is to say you’ll never know anything unless it’s been scientifically proved to be correct, to some limited accuracy with appropriate error bars.

So I am writing this blogpost to teach myself to be more careful in defense of science, and to acknowledge that other ways of knowing exist, though they are not, as this interesting list suggests, LSD, divination via Ouija boards, Shamanic journeying, or randomly opening the Bible and reading something.

Before we can argue though, we have to clarify what we mean by science and by knowledge.

The question “What is science?” has been discussed extensively and, philosophers being philosophers, I don’t think it will ever be settled. Instead of defining science, let me therefore just describe it in a way that captures reality very well: Science is what scientists do. Scientists form a community of practice that shares common ethics, ethics that aren’t typically written down, which is why defining science proper is so difficult. These common ethics are what is usually referred to as the scientific method: the formulation of hypotheses and their test against experiment. Science, then, is the process that this community drives.

This general scientific method, it must be emphasized, is not the only shared ethic in the scientific community. Parts of the community have their own guidelines for good scientific conduct, additional procedures and requirements which have been shown to work well in advancing the process of finding and testing good hypotheses. Peer review is one such added procedure; guidelines for statistical significance or the conduct of clinical trials are others. While the scientific method does not forbid it, random hypotheses will generally not even be considered because of their low chances of success. Instead, a new hypothesis is expected to live up to the standards of the day. In physics this means for example that your hypothesis must meet high demands on mathematical consistency.

The success of science comes from the community acting as an adaptive system on the development of models of nature. There is a variation (the formulation of new hypotheses), a feedback (the test of the hypothesis), and a response (discard, keep, or amend). This process of arriving at increasingly successful scientific theories is not unlike natural selection, which results in increasingly successful life forms. It’s just that in science the variation in the pool of ideas is more strongly regulated than the variation in the pool of genes.

That brings us to the question of what we mean by knowledge. Strictly speaking you never know anything, except possibly that you don’t know anything. The problem is not in the word ‘knowing’ but in the word ‘you’ – that still mysterious emergent phenomenon built of billions of interacting neurons. It takes but a memory lapse or a hallucination and suddenly you will question whether reality is what it seems to be. But let me leave aside the shortcomings of human information processing and the fickle notion of consciousness; knowledge then becomes the storage of facts about reality, empirical facts.

You might argue that there are facts about fantasy or fiction, but the facts that we have about them are not facts about these fictions but about the real representations of that fiction. You do not know that Harry Potter flew on a broom; you know that a person called Rowling wrote about a boy called Harry who flew on a broom. In a sense, everything you can imagine is real, provided that you believe yourself to be real. It is real as a pattern in your neural activity, you just have to be careful then in stating exactly what it is that you “know”.

Let us call knowledge “scientific knowledge” if it was obtained by the scientific method as applied by what we refer to as scientists, with their methods in the broader sense. Science is then obviously a way to arrive at knowledge, but it is also obviously not the only way. If you go out on the street, you know whether it is raining. You could make this into a scientific process with a documented randomized controlled trial and peer-reviewed statistical analysis, but nobody in their right mind would do this. The reason is that the methods used to gather and evaluate the data (your sensory processing) are so reliable that most people don’t normally question them, at least not when sober.

This is true for a lot of “knowledge” that you might call trivial knowledge, for example knowing how to spell “knowledge”. This isn’t scientific knowledge; it’s something you learned in school together with hundreds of millions of other people, and you can look it up in a dictionary. You don’t formulate the spelling as a hypothesis that you test against data because there is very little doubt about it in your mind or in anybody else’s mind. It isn’t an interesting hypothesis for the scientific community to bother with.

That then brings us to the actually interesting question of whether there is non-trivial knowledge that is not scientific knowledge. Yes, there is, because science isn’t the only procedure in which hypotheses are formulated and tested against data. Think again of natural selection. The human brain is pretty good, for example, at extrapolating linear motion or the trajectories of projectiles. This knowledge seems to be hardwired, even infants have it, and it contains a fair bit of science, a fair bit of empirical fact: balls don’t just stop flying in midair and drop to the ground. You know that. And this knowledge most likely came about because it was of evolutionary advantage, not because you read it in a textbook.

Now you might not like to refer to it as knowledge if it is hardwired, but similar variation and selection processes take place in our societies all the time outside of science. Much of it is know-how: handcrafts, professional skills, or arts that are handed down through generations. We take experts’ advice seriously (well, some of us, anyway) because we assume they have undergone many iterations of trial and error. The experts are of course not infallible, but we have good reason to expect their advice to be based on evidence that we call experience. Expert knowledge is integrated knowledge about many facts. It is real knowledge, and it is often useful knowledge; it just hasn’t been obtained in the organized and well-documented way that science would require.

Among this non-scientific knowledge you can also count, for example, the knowledge that you have about your own body and the people you live together with closely. This is knowledge you have gathered and collected over a long time, and it is knowledge that is very valuable for your doctor should you need help. But it isn’t knowledge that you find in the annals of science. It is also presently not knowledge that is very well documented, though with all the personalized biotracking this may be changing.

Now these ways of knowing are not as reliable as scientific knowledge because they do not live up to the standards of the day – they are not carefully formulated and tested hypotheses, and they are not documented in written reports. But this doesn’t mean they are no knowledge at all. When your grandma taught you to make a decent yeast dough, the recipe hadn’t come from a scientific journal. It had come through countless undocumented variations and repetitions, hundreds of trials and errors – a process much like science and yet not science.

And so you may know how to do something without knowing why this is a good way to do it. Indeed, it is often such non-scientific knowledge that leads to the formulation of interesting hypotheses whose tests confirm or falsify explanations of causal relations.

In summary: Science works by testing ideas against evidence and using the results as feedback for improvements. Science is the organized way of using this feedback loop to increase our knowledge about the real world, but it isn’t the only way. Testing ideas against reality and learning from the results is a process used in many other areas of our societies too. The knowledge obtained in this way is not as reliable as scientific knowledge, but it is useful and in many cases constitutes a basis for good scientific hypotheses.

Sunday, June 01, 2014

Are the laws of nature beautiful?

Physicists like to talk about the beauty and elegance of theories, books have been written about the beautiful equations, and the internet, being the internet, offers a selection of various lists that are just a Google search away.

Max Tegmark famously believes all math is equally real, but most physicists are pickier. Garrett Lisi may be the most outspoken example; he likes to say that the mathematics of reality has to be beautiful. Now Garrett’s idea of beautiful is a large root diagram, which may not be everybody’s first choice, but symmetry is a common ingredient of beauty.

Physicists also like to speak about simplicity, but simplicity isn’t useful as an absolute criterion. The laws of nature would be much simpler if there were no matter, if symmetries were never broken, or if the universe were two-dimensional. But this just isn’t our reality. As Einstein said, things should be made as simple as possible but not any simpler, and that limits the use of simplicity as a guiding principle. When simplicity reaches its limits, physicists call upon beauty.

Personally, I value interesting over beautiful. Symmetry and order are to art what harmony and repetition are to music – bland in excess. But more importantly, there is no reason why the sense of beauty that humans have developed during evolution should have any relevance for the fundamental laws of nature. Using beauty as a guide is even worse than appealing to naturalness. Naturalness, like beauty, is a requirement based on experience, not on logic, but at least naturalness can be quantified, while beauty is subjective, and malleable besides.

Frank Wilczek has an interesting transcript of a talk about “Quantum Beauty” online in which he writes
“The Standard Model is a very powerful, very compact framework. It would be difficult... to exaggerate.. its beauty.”
He then goes on to explain why this is an exaggeration. The Standard Model really isn’t all that beautiful, what with all these generations and families of particles – and let’s not even mention the Yukawa couplings. Frank thinks a grand unification would be much more beautiful, especially a supersymmetric one:
“If [SUSY’s new particles] exist, and are light enough to do the job, they will be produced and detected at [the] new Large Hadron Collider – a fantastic undertaking at the CERN laboratory, near Geneva, just now coming into operation. There will be a trial by fire. Will the particles SUSY requires reveal themselves? If not, we will have the satisfaction of knowing we have done our job, according to Popper, by producing a falsifiable theory and showing that it is false.”
Particle physicists who have wasted their time working out SUSY cross-sections don’t seem to be very “satisfied” with the LHC no-show. In fact they seem to be insulted because nature didn’t obey their beauty demands. In a recent cover story for Scientific American Joseph Lykken and Maria Spiropulu wrote:
“It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true.”
That is another exaggeration of course, a cognitive bias known as the “false-consensus effect”. People tend to think that others share their opinion, but let’s not dwell on the sociological issues this raises. Yes, symmetry and unification have historically been very successful, and that is a good reason to try to use them as a guide. But is it sufficient reason for a scientist to believe that they must be true? Is this something a scientist should ever believe?

Somewhere along the line theoretical physicists have mistaken their success in describing the natural world for evidence that they must be able to recognize truth by beauty, that introspection suffices to reveal the laws of nature. It’s not like it’s only particle physicists. Lee Smolin likes to speak about the “ring of truth” that the theory of quantum gravity must have. He hasn’t yet heard that ring. String theorists on the other hand have heard that bell of truth ringing for some decades and, ach, aren’t these Calabi-Yaus oh-so beautiful and these theorems so elegant etc. pp. One ring to rule them all.

But relying on beauty as a guide leads nowhere because understanding changes our perception of beauty. Many people seem to be afraid of science because they believe understanding will diminish their perception of beauty, but in the end understanding most often contributes to beauty. However, there seems to be an uncanny valley of understanding: When you start learning, it first gets messy and confused and ugly, and only after some effort do you come to see the beauty. But spend enough time with something, anything really, and in most cases it will become interesting and eventually you almost always find beauty.

If you don’t know what I mean, watch this classic music critic going off on 12 tone music. [Video embedding didn't work, sorry for the ad.]

Chances are, if you listen to that sufficiently often you’ll stop hearing cacophony and also start thinking of it as “delicate” and “emancipating”. The student who goes on about the beauty of broken supersymmetry with all its 105 parameters and scatter plots went down that very same road.

There are limits to what humans can find beautiful, understanding or not. I have, for example, a phobia of certain patterns which, if you believe Google, is very common. Much of it is probably due to the appearance of some diseases, parasites, poisonous plants and so on, i.e., it clearly has an evolutionary origin. So what if space-time foam looks like a skin disease and quantum gravity is ugly as gooseshit? Do we have any reason to believe that our brains should have developed so as to appreciate the beauty of something none of our ancestors could possibly ever have seen?

The laws of nature that you often find listed among the “most beautiful equations” derive much of their beauty not from structure but from meaning. The laws of black hole thermodynamics would be utterly unremarkable without the physical interpretation. In fact, equations in and by themselves are unremarkable generally – it is only the context, the definition of the quantities that are put in relation by the equation that make an equation valuable. X=Y isn’t just one equation. Unless I tell you what X and Y are, this is every equation.
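To make the point about meaning concrete, here is the textbook example (my addition, in standard notation with geometric units G = c = ħ = k = 1): the first law of black hole mechanics, read purely as an equation among geometric quantities, is just another differential relation. It acquires its “beauty” only through the thermodynamic dictionary T = κ/2π and S = A/4.

```latex
% First law of black hole mechanics (standard form, units G = c = hbar = k = 1):
% M = mass, A = horizon area, kappa = surface gravity,
% Omega_H = horizon angular velocity, J = angular momentum,
% Phi_H = horizon electric potential, Q = charge.
\[
  dM \;=\; \frac{\kappa}{8\pi}\, dA \;+\; \Omega_H\, dJ \;+\; \Phi_H\, dQ
\]
% With the Hawking temperature T = kappa/(2 pi) and the
% Bekenstein-Hawking entropy S = A/4, this becomes an ordinary first law:
\[
  dM \;=\; T\, dS \;+\; \Omega_H\, dJ \;+\; \Phi_H\, dQ
\]
```

Without the identification of T and S with temperature and entropy, the first line is an unremarkable statement about horizon geometry; with it, it is thermodynamics.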

So, are the laws of nature beautiful? You can bet that whatever law of nature wins a Nobel prize will be called “beautiful” by the next generation of physicists who spend their life studying it. Should we use “beauty” as a requirement to construct a scientific theory? That, I’m afraid, would be too simple to be true.