Friday, December 31, 2010
Saturday, December 25, 2010
32-33-34-35-36 formerly known as 37-38-39-40-41.
The solution we are looking for is 9-7-30-15-5-12 32-14-37-21-41.
This year's prize is a BackRe(Action) mug, and it will go to the first person who submits the right answer in the comments. (For the shipment, we'll need your snail-mail address. If you are not willing to provide your address, please do not spoil the fun.)
If the quiz turns out to be more difficult than I thought, I'll leave some hints in the comments later.
Here's the solution.
1) ICECUBE, a neutrino experiment at the South Pole, picture taken from here, more info here.
2) ATLAS, LHC's largest detector, picture taken from here, more info here.
3) Super-Kamiokande, a neutrino experiment in Japan, picture taken from here, more info here.
4) The Cryogenic Dark Matter Search CDMS in the Soudan Underground Lab, picture taken from here, more info here.
5) Fermi formerly known as GLAST, NASA's Gamma ray space telescope, picture taken from here, more info here
Thursday, December 23, 2010
Ironically, now that I've made it full term, the docs tell me that for the sake of my own health the pregnancy had better not continue too much longer. The overstretched tissue, so they claim, brings a heightened risk of severe bleeding or uterine rupture, which I'm admittedly not too keen on. Add to this that I've developed some late-pregnancy complications that, while at present not of immediate concern, are beneficial neither for my own nor for the babies' health if they persist. Luckily, the girls are both positioned head down, so I'm at least not a priori in need of a cesarean section. I am now scheduled for induction of labor the week after Christmas - unless something happens before then - and I hope this goes well. The babies' weight is now estimated above 2.5 kg each, and they are all ready for their first breath of their own.
That means you'll have to expect it to be quiet on this blog for a while till I've recovered and we've accommodated ourselves to the new situation. However, pregnant or not, we will of course still have our annual Christmas quiz! (See here for the ones from 2007, 2008 and 2009.) This year's quiz is prescheduled for Dec. 25th, 5pm CET, in the hope that this is a convenient time for the majority of our readers. The prize is a BackRe(Action) mug, so don't miss it.
We wish you all a happy season and a peaceful Christmas time.
Monday, December 20, 2010
Our cosmos was "bruised" in collisions with other universes. Now astronomers have found the first evidence of these impacts in the cosmic microwave background.
This left me deeply puzzled because I had read the paper in question:
- First Observational Tests of Eternal Inflation
By Stephen M. Feeney, Matthew C. Johnson, Daniel J. Mortlock, Hiranya V. Peiris
arXiv:1012.1995 (see here for an extended version)
yet seemed to have read something completely different out of it. So what's this all about?
The cosmic microwave background (CMB) we measure today is a relic from the time when the universe was only 300,000 years old and radiation decoupled from matter. Since then, photons could travel almost undisturbed. Thus the radiation, especially the fluctuations around its mean temperature, contains valuable information about the history of the universe. The CMB temperature fluctuations have been measured with great precision by the now-completed WMAP mission, and I'm sure you've all seen their skymap.
There are several models of inflation that differ in the detailed predictions, but the rapid expansion is what they have in common. A particular variant of inflation is called "eternal inflation." As the name says, in that case inflation does not end completely but continues eternally. The way this is thought to happen is that inflation only ends locally when a metastable "false" vacuum state decays into a "true" vacuum state and subsequently continues along a local inflation scenario that ends and results in matter formation and gives rise to a patch like our own, commonly called "bubble universe." However, the areas of false vacuum never decay away completely because they expand more quickly than they can decay. As a result, new bubble universes continue to be formed out of the false vacuum eternally.
That finally brings us to Feeney et al's paper. Inspired by earlier work by Aguirre et al (Towards observable signatures of other bubble universes, arXiv:0704.3473) they studied the possibility that a bubble collision in our past has left an imprint in the CMB. Their paper basically presents a particular analysis scheme for the CMB temperature fluctuations. Projected on the 2-dimensional surface of last scattering, the leftover signal would have azimuthal symmetry. They assume that a bubble collision has left a mark in the CMB that consists of a slightly different temperature in such an azimuthal patch.
They use an algorithm to analyze the temperature fluctuation that works in three steps. First, search for areas with azimuthal symmetry. Second, search for edges where the temperature makes a slight step. Third, if you've found that, look for the best parameters to reproduce what you've found. They then go on to create fake CMB fluctuations with signals of bubble collisions to quantify how well their algorithm works. The picture below, taken from Feeney et al's paper, depicts the stages of this simulation. Each quarter of the skymap is supposed to show the same area, just mirrored horizontally and vertically. The upper left part shows the patch with the temperature variation from the bubble collision without fluctuations superimposed (the Mollweide projection used to plot the map distorts the shape). The upper right part adds random fluctuations. Now the task is to get the signal back. The lower left part shows the result of looking for patches of azimuthal symmetry, the lower right one the result of looking for edges with temperature steps.
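To make the three-step scheme a little more concrete, here is a minimal toy sketch in Python. This is not the authors' actual pipeline, which works on the full sphere with far more sophisticated statistics; the flat patch, the disc-shaped template and all parameter values are my own simplifying assumptions, chosen only to illustrate the matched-filter idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sky patch": Gaussian noise plus a disc of slightly raised temperature,
# standing in for the azimuthally symmetric imprint of a bubble collision.
N = 128
y, x = np.mgrid[:N, :N]
cy, cx, radius, amplitude = 50, 80, 20, 0.5   # hypothetical signal parameters
disc = ((x - cx)**2 + (y - cy)**2 <= radius**2).astype(float)
sky = amplitude * disc + rng.normal(0.0, 1.0, (N, N))

# Step 1: search for azimuthally symmetric patches. Here: cross-correlate
# the map with a mean-subtracted disc template via FFT; the correlation
# peak marks the best-fitting centre of a symmetric spot.
template = ((x - N // 2)**2 + (y - N // 2)**2 <= radius**2).astype(float)
template -= template.mean()
corr = np.fft.ifft2(
    np.fft.fft2(sky) * np.conj(np.fft.fft2(np.fft.ifftshift(template)))
).real
peak = np.unravel_index(np.argmax(corr), corr.shape)  # (row, col) of candidate

# Step 2: look for the temperature step at the edge by comparing the mean
# temperature in a thin annulus just inside the recovered radius with one
# just outside it.
r = np.sqrt((x - peak[1])**2 + (y - peak[0])**2)
inside = sky[(r > radius - 5) & (r <= radius)].mean()
outside = sky[(r > radius) & (r < radius + 5)].mean()
edge_step = inside - outside  # a clear positive step supports the candidate

# Step 3 (parameter estimation) would now fit centre, radius and amplitude;
# in this sketch the recovered peak and edge_step already play that role.
print(peak, round(edge_step, 2))
```

Even with noise twice the signal amplitude, the matched filter recovers the disc centre to within a pixel or two, which is roughly why the symmetry search is the most sensitive of the three steps, while the edge detection on real data is much harder.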
After testing out their algorithm on fake data to understand what features it is able to identify with certainty, they come to the interesting part and analyze the actual CMB data. Their algorithm doesn't find edges, but it identifies 4 regions of interest whose features could possibly have been caused by bubble collisions. As the authors put it, these features are "compatible" with having been caused in that way. Two of these spots of interest have, by the way, previously been discussed: one is the well-known CMB "cold spot," the other was identified in this paper, which used an analysis similar to Feeney et al's. It is important to emphasize though that the identification of these spots was based solely on the symmetry; the algorithm was not able to find the second identifier, the edge of the spot. For this reason the authors are careful to make clear:
"Without the corroborating evidence of a circular temperature discontinuity, we cannot claim a definitive detection [...] Azimuthally symmetric temperature modulations are not unique to bubble collisions."
Though it might be that better data from the Planck satellite will allow a less ambiguous signal to be extracted in the coming years, this is so far clearly no evidence for a bubble collision. Feeney et al's results are just once again evidence that there are some features in the CMB.
One also has to keep in mind that their paper already starts from the assumption that the signal of a bubble collision is of a very particular sort, merely resulting in a small temperature difference. It leaves entirely open the question of how likely it is that a particular model of eternal inflation would result in a signal that is just barely observable, rather than in features entirely incompatible with what we've seen so far. It is entirely unclear to me, for example, what would happen if the vacuum in the other bubble, or possibly even its physical constants, were different from ours. It seems quite unlikely that a tiny temperature modulation is all that would come out of it. I don't think anybody has at this point a comprehensive picture of what might happen in a general bubble collision. The question is then whether it is not extremely improbable that our bubble was subject to a collision, and that this collision, rather than wiping us out, was just nice enough to reveal itself in the upcoming Planck data.
In any case, the analysis put forward in Feeney et al's paper serves to rule out some regions of the parameter space in models that produce such an imprint in the CMB. Such constraints are always good to have. It is a nice and very straightforward paper presenting an observer's take on eternal inflation. It's a very worthwhile analysis indeed - imagine how exciting it would be to find evidence for other universes! However, so far the evidence keeps us waiting.
Update: See also one of the author's guest post at Cosmic Variance Observing the Multiverse.
Thursday, December 16, 2010
- Time to democratise science
By Dan Hind
Anyway, given this troublesome bias allegedly caused by funding sources Hind concludes
"[I]t is surely time to consider an alternative. If we are serious about science as a public good, we should give the public control over the ways in which some - and I stress "some" - of its money is spent.
I propose taking a portion of the money that subsidises private industry and giving it to new bodies set up to allocate resources on the basis of a democratic vote. Scientists could apply to these bodies for funding and we could all have a say in what research is given support."
Well, a lot of science funding comes from the national science foundations, and their agenda is set by national politics, for control of which we go and cast our vote on election day. That this didn't prevent the issue with economics research demonstrates that the academic system has shortcomings other than bribery that spoil objectivity. So much for Hind's motivation for his argument.
Leaving aside the shaky reasoning, I am afraid that what Hind means by "democratic vote" is not a representative democracy. If his "alternative" is supposed to be something new, he must be talking about a grassroots democracy; the wisdom of the masses and all. That interpretation of his proposal of "democratisation" also fits well with his being the author of a book called The Return of the Public that, according to the blurb, "outlines a way forwards for a new participatory politics." But back to the New Scientist article, where Hind explains the benefits of letting the public decide on research funding by referendum:
"Think what such a system could achieve. With public support, the few economists that predicted the financial crash could have gained greater access to publicity as well as more research resources. Public concern with environmental degradation could guide much-needed funds into alternative energy research."
I have no clue why that should be. In fact, I suspect a public "vote" on which scientific research projects deserve funding would make matters significantly worse rather than better. The main reason is that it would set incentives for researchers to produce results the public wants to hear rather than to rely on their own sense of what is important. Hind continues:
"There is no good reason I can see why science funding could not be made subject to democratic decision-making. Yes, it will hand power to non-experts, but so does the present system: non-experts in the state and private sector often have a decisive say in what scientists study."
The present system does give non-experts the power to set general guidelines, but the details are left to experts. And that is, imho, good as it is. The problem with the academic system is definitely not that "the taxpayer" has too little say in what researchers study, but rather that the system itself suffers from internal problems. I've written on that many times and don't want to repeat the details here. For more, check e.g. my posts Science and Democracy III and We have only ourselves to judge each other. The title of the latter says it all: the only people who can plausibly have an informed opinion on what research projects are worth pursuing are working in the field themselves. The problem with the academic system is, in short, that their opinions are unfortunately influenced by all sorts of external pressures, with the result that grants are not used efficiently. The cure isn't to replace experts' judgement with that of uninformed people, but to make sure the judgement is unbiased.
In contrast to Hind, I can see several good reasons why science funding should not be made subject to a public vote, except for setting the general agenda by assigning funds to the respective agencies and their programs. The reasons are the same reasons why pretty much all democracies on the planet are representative democracies. First, public opinion changes from one day to the next. That's no basis on which one can pursue research. Second, public opinion is easily influenced by those who have enough money to spend on media relations and search engine optimization. This works completely against Hind's own argument that the problem is the influence of wealthy people and corporations.
The third and most important point is this: the very reason academic research is mostly funded as a public good rather than through individual investment is that, despite its recognized relevance for the well-being and progress of our societies, it is such a long-term investment that very few people would privately put money into it. If one then asks them to decide where money they would not individually invest should be spent, there is zero reason to believe that the money would be spent well.
(A fourth reason why the public opinion may not be suitable to call upon directly for decision making is that it may be inconsistent, but that's not a relevant point here. For more on that see my post The Nature of Laws.)
Hind explains his opinion:
"Certainly the public will sometimes support research that seems fanciful to informed insiders. We won't always spend our money wisely. But the opportunity to exercise power is a great educator. The successes and failures of democratically funded science would promote a much more vigorous public debate about the purpose of research."
The big problem is that it may take decades or even centuries to figure out what was a success and what a failure. The feedback loop in this "education" is way too long to be effective; it's not something that will lead to an optimization. That's the reason so much research is pursued as a public service to begin with.
Let me be very clear here. I write this as a taxpayer myself: I am not qualified to make certain judgements. I prefer to delegate my voice to somebody who has the time and makes the effort to obtain and survey all the relevant information on some decision, rather than making a sloppy and uninformed decision myself because, after all, I have a job. In other words, I believe representative democracies are a good system (though there's no doubt they could use some improvements). There is a place in our societies for direct public votes, and we have tools for exactly this purpose. The funding of research projects just clearly isn't one of them.
Wednesday, December 15, 2010
By Mary Roach
W. W. Norton & Company; Reprint edition (May 2004)
After my previous read on the beginning of life, this one is about the end of it. Mary Roach has collected data, historical facts and curious anecdotes on the fates of human corpses. Despite the unappetizing topic, it is an entertaining read.
Roach discusses decay, burial and its alternatives, giving one's body to science for anatomical studies (where one might serve as a practice for face lifting), organ donation and brain death, plastination, preservation, embalming or becoming a post-mortem crash-test dummy. You, or at least parts of you, can also end up being shot at to study the stopping power of bullets. She further covers the examination of victims of fatal accidents, for example plane crashes, to obtain information about the accident's cause, cannibalism, and experiments that were done to determine whether the shroud of Turin is authentic.
She evidently did a lot of reading, and in many cases went to visit the places where experiments were made and talked to the scientists. Roach also does not hold back with her opinion, neither on organ donation nor on the credibility of some scientists or their publications. Thomas Edison, for example, comes off as “a loopy individual,” and she remarks about one author: “[He] is not a doctor, or not, at least, one of the medical variety. He is a doctor of the variety that gets a Ph.D. and attaches it to his name on self-help book covers. I found his testimonials iffy as evidence...” One might or might not agree with her opinions, but I found it very refreshing that she speaks her mind and does not leave the reader with a white-washed who-said-what, an unfortunately widespread habit among science writers that is sold as balanced reporting but is eventually mostly useless reporting. She also doesn't swallow every story she's read but tries to verify it herself, as for example in a case of cannibalism reported from China that turns out to be made up. While the report on her travel to China is somewhat pointless in that it doesn't contribute to the theme of the book, it speaks for Roach's fact-checking.
The book is full of absurdities from the history of science, such as techniques used in the 18th and 19th century to verify death, among them putting insects into the corpse's ear or rhythmic tongue-pulling for three hours following the suspected death. The reader also learns that the average human stomach bursts when stretched beyond a volume of approximately 4 liters, and that the Urban Institute in 1991 calculated the value of one human life at US $2.7 million. (One is left to wonder whether that's the global average or the value of US citizens.) On some topics I found the coverage thin and would have expected more details, for example on the history of burial or the progress in organ transplantation. I was also surprised that the fate of Einstein's brain didn't even make it into a footnote.
I guess there are only two ways to approach the topic of decaying human remains: either with gravity and philosophy, or with humor. Mary Roach does it with humor, and she does it well, though her jokes become quite foreseeable after a few chapters. I found her tendency to self-deprecation a little disturbing: she portrays herself as an annoying person whom her interview partners must think badly of, reflected in sentences like “[He] throws me a look.... [The look] says I'm a petit bouchon fécal [French, roughly: little piece of shit]” or “She considers this fact. I am feeling more like last week's coleslaw than usual.” It's probably supposed to be funny-ha-ha, but it makes me wonder about the author's self-image.
Taken together, the book is smoothly written, entertaining, and covers the topic well. If this were an Amazon review, I'd give five stars for flawlessness. Having finished “Stiff,” I have to say though that the topic isn't one I'm particularly interested in after all. The book has, however, provided me with plenty of useless knowledge that is certain to make me a memorable guest at the next dinner party.
Friday, December 10, 2010
- In my post It comes soon enough I speculated on some future developments, among them:
“I've been thinking... that... it would be possible to grow meat suitable for consumption without having to bother with the whole animal. [A] century from now, we'll have factories with organ bags that resemble nothing like animals at all.”
In an interview of Time Magazine with Ray Kurzweil I read last week:
“We'll grow in vitro cloned meats in factories that are computerized and run by artificial intelligence. You can just grow the part of the animal that you're eating."
For the complete interview, see 10 Questions for Ray Kurzweil
- If you want more evidence that I have my thumb on the pulse of time: In my post Why'd you have to go and make things so complicated? I remarked on the predictability of complex systems:
“You don't need to predict the dynamics of the system. You just need to know what parameter space it will smoothly operate in so optimization works.”
A recent article by Seed Magazine quotes Tom Fiddaman who, in collaboration with MIT and the Sustainability Institute, examines the policy implications of dynamic complexity in climate and economic models:
“You are in a sort of dance with this complicated mess,” he says, explaining that it is impossible to determine the individual steps of this “dance”—and this is in some sense the error of current thinking. Instead, we need to be able to construct robust solutions that provide general guidelines for what style of dance we should be doing. They need to be flexible and capable of withstanding the inevitable unpredictable behaviors of complex systems.
The whole article, titled Knowing sooner, is a very recommendable read.
- I just learned that since July 1st, fast internet access is a legal right in Finland. Don't have much to say about it, just find it noteworthy.
- Most concise paper ever: Unsuccessful treatment of writer's block.
- I spoke to a science writer about What's at the center of black holes - and then forgot about it.
“From a theoretical point of view, the singularity is something where something becomes infinitely large,” said physicist Sabine Hossenfelder at the Nordic Institute for Theoretical Physics. [That's not what she wrote, but what I actually said.]
No one can be sure that their singularity doesn't describe a physical reality, Hoss[en]felder told Life's Little Mysteries. But most physicists would say that the singularity, as theorized by equations, doesn't really exist. If the singularity was “really real,” then it would mean that “energy density was infinitely large at one point,” exactly the center of the black hole, she said.
However, no one can know for sure, because no complete quantum theory of gravity exists, and the insides of black holes are impossible to observe.
- My recent paper with Xavier Calmet and Roberto Percacci just got published.
I wish you all a nice weekend and don't forget to light the 3rd candle.
Friday, December 03, 2010
“[A] scientist involved in basic research is by definition motivated: We do what we do because we are passionate about understanding the universe...
Human ingenuity being what it is, the future will undoubtedly bring applications based on discoveries made with the LHC. Although, as with Newton’s gravity, it may be some time before we’re privy to all of them, and to their implications. For our children and grandchildren, however, I am sure that the wait will have been worthwhile.”
In my earlier post Knowledge for the sake of knowledge, I complained that all too often the case for the relevance of basic research is made by arguing that eventually some technology will come out of it. This leaves aside the relevance that knowledge itself has, whether or not it results in some new gadget that you'll find under the tree in a decade, despite the fact that most people working in the field are driven by the gain in knowledge, since applications are often too remote to be a tangible personal goal. I think that insights on fundamental questions about the nature of reality themselves have a direct influence on our societies. Consider topics like free will, or the multiverse question of whether the physics in our universe is the only one possible or just one of many possibilities. I was thus happy to see that Heuer didn't try to sell the LHC as something that obtains its value merely by its rôle in producing new technologies.
In Canada, basic research is doing well: As you might have read on Peter's blog or in the Globe & Mail, the Bank of Montreal has donated CAN $4 million to Perimeter Institute to establish “the BMO Financial Group Isaac Newton Chair in Theoretical Physics at Perimeter Institute.” Bill Downe, President and Chief Executive Officer of the BMO Financial Group said
“The Institute’s ambitious thirst for new knowledge places it at the very frontier of discovery. Its thinkers can change our world by boldly pushing the boundaries of our current understanding of physical laws. We couldn’t be more proud of this association and hope that our unique investment in the BMO Isaac Newton Chair in Theoretical Physics will enhance innovation in Canada and encourage other private sector donors to fund Chairs at PI.”
So, congratulations to PI! In PI's press release, one also finds a quotation from Mike Lazaridis, founder of Perimeter Institute, who repeats the usual justification for basic research with the prospect of technological applications. In fact, he goes so far as to say:
“Theoretical physics has driven the most important insights and technological advances in the history of humankind. Although the outcomes from basic research may not be immediate, they are inevitable...”
That's quite a bold statement, don't you think?
I completely agree with Heuer that basic research is instrumental for progress, but I'm far from sure that basic research of any sort “inevitably” leads to technological advances. Take for example the recent media buzz about the recycled idea that the universe did not start with the Big Bang, and consider for a moment that this turns out to be correct. The question is clearly of high relevance for us to understand our place in the universe, but since the distinction between bang and bounce lies 14 billion years in the past, I'm having some trouble imagining what technology might possibly come out of an experimental distinction. I can easily imagine what it might be good for to find superluminal propagation of information to be possible, and could come up with a dozen applications for antigravitation. I can imagine that the development of quantum gravity and/or string theory will one day be of relevance for quantum computing, and that finding the Higgs or some alternative mechanism to generate particle masses will in the remote future play a role in energy generation. But especially when it comes to cosmology, it seems to me the outcome is mainly in the realm of pure knowledge, addressing the eternal questions of where we come from and where we go.
But hey, my imagination is finite, so let your fantasy fly free and tell me what inevitable application a big bounce scenario might have one day. Even better, tell me what, in your wildest dreams, will be the outcome of some basic research of your choice in theoretical physics that is pursued today.
Sunday, November 28, 2010
I've read most of Stephen King's books, and "Under the Dome" is clearly one of his best. It's the story of a small city suddenly cut off from the rest of the world by a transparent barrier, the "Dome." On the one hand it's one of these stories that show how many things we take for granted are quite fragile achievements of civilization, like water, electricity, or food supply. On the other hand, King masterfully tells how the small-town leaders abuse their power and manipulate the townsfolk, while ordinary people find their inner hero. Of course the book also has a significant yuck-factor, King-style. The ending leaves the scientist somewhat unsatisfied as to explanation, but then King isn't known as a sci-fi author.
One long Saturday in the life of a neurosurgeon. It's an extremely well-told story with very carefully worked out and authentic characters. While there isn't actually much plot in this book, the reader gets to share the mind of the main character, his thoughts about current events, terrorism, the war in Iraq as well as aging and happiness in his own life. I found the book in parts quite annoying because of side-long detailed explanations about every move in a squash match or how to cook a bouillabaisse for dinner, but if you occasionally like to see the world through somebody else's eyes, this book is for you.
The main character of this novel is Michael Beard, a Nobel Prize winner, now in his late 50s, with a long history of marriages and affairs. He doesn't see how he can make further contributions to physics, so he sets out to get famous in the flourishing business of clean energy and climate change. The story is a mixture of his private life with his attempt to leave a mark in history by not-so-noble means. The physics is sufficiently plausible, the author has clearly done his homework, and I found the story highly amusing and entertaining. As with "Saturday," by reading this book you'll get to see the world through somebody else's eyes. Very recommendable.
The main character of this book is Edgar who, after a work accident that leaves him one-armed, also loses his wife and moves to Florida for a new start. There, he finds he has acquired a new talent, painting. And not only does he suddenly come to fame through his new talent, his paintings also have an eerie influence on his and other people's lives and bring him into contact with scary powers that awaken from a long sleep. Together with newfound friends, Edgar sets out to battle these powers and put them back to sleep. It's a well-written story and an easy read, though there are repeated remarks about some good power watching over our heroes, so that they "just know" what to do, which is never explained. The reader is left to wonder what this is all about, definitely not a feature I've encountered in earlier Stephen King novels.
King tells the story of Lisey, the wife of a recently deceased famous writer who had, one could say, access to a parallel world. King being King, besides the writer's inspiration there are monsters and dangers lurking in that world. The story of that other world is woven together very nicely with Scott's family history and his marriage. The story is told after Scott's death, when Lisey has to deal with a mentally disturbed person who is threatening her. However, the plot takes several hundred pages to actually start, and then much of it doesn't make very much sense. Lisey constantly follows some intuitions for doing this or that which are never explained (similar to "Duma Key"); she "just knows" it's the right thing to do. It's very unsatisfactory.
Wednesday, November 24, 2010
Some examples that came to my mind were the "élan vital" (the belief that life is some sort of substance), the theory of the four humors (one consequence of which was the widespread use of bloodletting as medical treatment for all sorts of purposes), the static universe, and the non-acceptance of continental drift. On the more absurd side of things is the belief that semen is produced in the brain (because the brain was considered the seat of the soul), and that women who are nursing turn menstruation blood into breast milk. From my recent read of Annie Paul's book "Origins" I further learned that until only some decades ago it was widely believed that pretty much any sort of toxin is blocked by the placenta and does not reach the unborn child. It was indeed recommended that pregnant women drink alcohol, and smoking was not of concern. This dramatically wrong belief was also the reason why thalidomide was handed out without much concern to pregnant women, with the now well-known disastrous consequences, and why fetal alcohol syndrome is a fairly recent diagnosis.
I was collecting more examples, not very actively I have to admit, but I found yesterday that somebody saved me the work! Richard Thaler, director of the Center for Decision Research at the University of Chicago Graduate School of Business, is working on a book about the topic, and he's asked the Edge-club for input:
"The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?"
You find the replies on this website, which include most of my examples and a few more. One reply that I found very interesting is that by Frank Tipler:
"The false belief that stomach ulcers were caused by stress rather than bacteria. I have some information on this subject that has never been published anywhere. There is a modern Galileo in this story, a scientist convicted of a felony in criminal court in the 1960's because he thought that bacteria caused ulcers."
I hadn't known about the "modern Galileo," is anybody aware of the details? Eric Weinstein adds the tau-theta puzzle, and Rupert Sheldrake suggests "With the advent of quantum theory, indeterminacy rendered the belief in determinism untenable," though I would argue that this issue isn't settled, and maybe never will be settled.
Do you know more examples?
Saturday, November 20, 2010
And if you think I was actually sleeping, just right of the photo, there's a table (you can see one edge) with a pile of papers on it...
Wednesday, November 17, 2010
- We discussed several times on this blog the question of how plausible metrics for scientific success are, see for example my posts Science Metrics and Against Measure. This week, the NYT reports an amusing fact from the recent Times Higher Education university ranking in the article Questionable Science Behind Academic Rankings: Alexandria University in Egypt made it onto the list at rank 147 (together with Uppsala) as the only Arab university. Just that, upon closer inspection, this success goes back to the enormous productivity of one researcher... and that is none other than Mohamed El Naschie. If you recall, two years ago we mentioned El Naschie's amazing publication record of more than 300 papers within a few years, published in a journal of which he also was editor-in-chief. He retired from his position a few weeks later. The NYT reports:
“But the news that Alexandria University in Egypt had placed 147th on the list — just below the University of Birmingham and ahead of such academic powerhouses as Delft University of Technology in the Netherlands (151st) or Georgetown in the United States (164th) — was cause for both celebration and puzzlement. Alexandria’s Web site was quick to boast of its newfound status as the only Arab university among the top 200...
Like most university rankings, the list is made up of several different indicators, which are given weighted scores and combined to produce a final number or ranking...
Phil Baty, deputy editor of Times Higher Education, acknowledged that Alexandria’s surprising prominence was actually due to “the high output from one scholar in one journal” — soon identified on various blogs as Mohamed El Naschie, an Egyptian academic who published over 320 of his own articles in a scientific journal of which he was also the editor. In November 2009, Dr. El Naschie sued the British journal Nature for libel over an article alleging his “apparent misuse of editorial privileges.” The case is still in court.”
- Somehow scary:
“In this edition, we have added, for the first time, annotated references in the text to provide the beginning of an evidence based approach to clinical methods.”
From the preface of “Clinical Examination,” by Nicholas J Talley & Simon O'Connor, 4th Edition, 2001.
- The results from our recent poll: Is the scientific process one of discovery or invention? A total of 167 people took the poll. To my surprise, most of them shared my opinion. The replies were: Both - 52.1%, Discovery - 33.7%, Invention - 10.4%, Neither - 1.8%, Don't know - 1.8%.
- Chad Orzel discusses the statistics on the initial employment of new PhDs in physics from 1979 to 2008 in his post Physics Job Market: Same As It Ever Was. Slightly more than 50% of new PhDs presently go on to a postdoc...
You may find yourself living in a shotgun shack... and you may find yourself in another part of the world... You may ask yourself, "Well, how did I get here?"... Same as it ever was... Same as it ever was... (Talking Heads, Once in a Lifetime).
- And if you think that statistic doesn't look so bad, you may want to watch this:
[Via Dynamics of Cats]
You may ask yourself... How do I work this?
El Naschie commented the following on the university ranking:
“I do not believe at all in this ranking business and do not consider it anyway indicatory of any merit of the corresponding university.”
Monday, November 15, 2010
Religion and science are not two different approaches to explain the world that need to be made compatible, possibly by discarding one entirely. Instead, religion (as well as other superstitious beliefs) is a historical pre-phase of scientific thinking. It’s a primitive exercise in story-telling and sense-making that has proven to be of advantage for its practitioners. Religions are part of our historical legacy, but the wide spread of religion we currently witness is a temporary phase. The two are not compatible in the same sense that an MP3 player isn’t compatible with living in the Stone Age. But then, people in the Stone Age were happy even without MP3 players.
As an atheist, my interest in religion stems from its major influence on our history and its lasting effect in shaping our societies. It is an interesting question: Why is it that so many people, all around the globe, believe in some god when that god and its tales are in outright conflict with scientific evidence or, at best, supported by no evidence whatsoever?
In an earlier post, I reported on results of a study looking for the neurological origin of religion. There have been a lot of studies in that direction during the last decade. Almost all of them, however, look not so much for the origins of religiosity as for the origins of supernatural thinking, or call it jumping to conclusions. The human brain looks for explanations, tries to find patterns, and to construct theories. These are skills that have proven very useful for our survival. Inventing gods arguably serves as some sort of explanation. Yet superstition generally serves that purpose too, and at the end of the road, if you carefully follow up on the explanations, if you construct correct theories, where you inevitably arrive is: science! And over the course of history, that’s the path we’ve taken: starting from gods and superstition towards science, by continuing to ask and to look for better and more useful explanations.
Thus, one could equally well say our looking for patterns and aiming to construct explanations is not only the origin of religion, but also the origin of scientific thinking, and drawing upon such a neurological basis to claim religion is hard-wired just confirms what we see in the world around us but doesn’t explain it. On that level, the difference between religion or superstition and science is simply how carefully you investigate the data and how much you learn about constructing consistent theories of reality, i.e. the question is at which level of explanation you stop searching. Tests for activity in certain brain regions look for activity on the very-short to short timescale. But that’s not all that constitutes human cognition. Compared to other species, humans are particularly good at careful deliberation, reflection and advance planning. That’s the basis of our success. Not so surprisingly then, most, if not all, religious people sooner or later have doubts about the reality of what they believe in. This doubt has to be constantly silenced to be a good believer, and many people indeed manage to lull themselves into a state of constantly belying their own intelligence. But the existence of these doubts tells us that religion is not the natural end state of the search for explanations. The human mind wants to question and solve puzzles. It wants to understand, not to be shut up. It wants to know more.
So, compared to science, religions don’t make particularly convincing or useful explanations for anything, and knowing what we know today, belief is only possible by working against one’s own intelligence. We are then still left to wonder: why do so many people believe?
In a recent opinion piece on the NYT blog, Tim Crane (who declares himself an atheist) wrote, in essence, that people chose to believe because it’s easier:
“[S]cientific explanation is a very specific and technical kind of knowledge. It requires patience, pedantry, a narrowing of focus and (in the case of the most profound scientific theories) considerable mathematical knowledge and ability...
Religious belief is a very different kind of thing. It is not restricted only to those with a certain education or knowledge, it does not require years of training, it is not specialized and it is not technical. (I’m talking here about the content of what people who regularly attend church, mosque or synagogue take themselves to be thinking; I’m not talking about how theologians interpret this content.)...
I would guess that very few people in the world are actually interested in the details of contemporary scientific theories... [M]ost people aren’t deeply interested in science, even when they have the opportunity and the basic intellectual capacity to learn about it.”
I don’t find this explanation plausible. True, if one wants to understand the details of modern science, it requires time and effort. But that’s true for everything, including religion, as Crane points out himself. Understanding the Bible (to the extent possible) also requires patience and a narrowing of focus. It would already make a difference if more people at least understood contemporary science on the level they understand their weekly sermon. No, the actual reason why people know more about their religion than, say, modern cosmology is that they go to church every Sunday instead of to a physics lecture. Their mindset is a consequence, rather than the origin, of what they spend time on. Just think about how amazingly quickly seriously ill patients learn details about their disease, up to the level where they know more about recent research than their doctors do. Suddenly they are deeply interested indeed; it’s just a matter of motivation.
Since Crane’s explanation isn’t convincing, let us ask again: why do so many people believe? I think it’s because religious belief has both psychological and social advantages, and these serve as a motivation science is often lacking. Let us start with the psychological advantage. Existential psychotherapy is a particularly simple (and overly simple) model of the human mind. It posits that we all have four so-called “existential fears” – the fear of death, of loneliness, of meaninglessness and the fear of freedom. (The latter refers to the fear of responsibility one has for one’s own decisions.) Psychological problems occur if one or several of these fears gain the upper hand and stifle personal development. Religions neatly address all of these fears. The social advantage comes from being a member of a global community with shared traditions, one that in many cases is very well organized, providing counseling and support in difficult phases of one’s life. The believer belongs, and he knows where he belongs.
The question now is what science has to offer.
On the psychological and social level, science has no such offers to make – at least not yet, and not obviously so. But I think these offers will come, and they will become more and more widely accepted.
On the psychological level, let us first mention again that science has the advantage of allowing – indeed welcoming – skepticism and doubt. The disadvantage is that one has to accept uncertainty as a necessary ingredient. Science does not address the four existential fears as directly as religion does, but it does to some extent, and that’s becoming more and more noticeable. There are for example numerous research programs trying to understand, explain, and modify human mortality. Of course these are on very different levels of scientific rigor and plausibility (ranging from freezing one’s brain, to uploading oneself to a computer, to improving the body’s DNA repair mechanisms). And of course they are not as complete a comfort as believing in an immortal soul that goes to heaven or is reborn, but they offer ground on which to grapple with the process of one’s own aging and death. As for the fears of loneliness, meaninglessness, and freedom: there is a lot of scientific research which addresses one or the other, on social, neurological, psychological, or philosophical grounds. That again is becoming more and more noticeable. Just have a look at the abundance of self-help books (e.g. on the topic of happiness). Yes, most of them are based on pseudo-science rather than sound science, but at least it’s a start. The point is not the quality of the science; the point is that these approaches are non-religious. It’s the beginning of a change of which I’m convinced we’ll see more.
And finally, of course, a major role is played by the ancient questions: Where do we come from? And what are we made of? Theoretical physics is on the most fundamental level of our search for explanations. This is why the questions asked in this field seem quite detached from life, and understanding current research – and its relevance – takes time and effort indeed. But it is this research we need to truly understand our place in the universe.
Monday, November 08, 2010
"I'd rather be a really good one-term president than a mediocre two-term president."
A rare case of a politician with a backbone. Given that, I can't say I was surprised by the election outcome. No, what depressed me was the lack of substance in the arguments. The American nation strikes me as similar to a group of overweight people who at their first Weight Watchers meeting chant "Yes, we can" and cheer for change. But when change is staring back from the dinner plate, and change on the scale keeps them waiting, they realize change doesn't come easy. And the vast majority of them still doesn't know the difference between social democracy and socialism. Clearly, the world would be a better place if everybody read my blog ;-)
Anyway, to some extent I don't care very much how the Americans organize their society. I think they're not fully using their potential, and find that a shame, but after all it's their decision what they put on their plates and shovel down their throats while I, well, I live in Sweden. And that brings me to one of the most amusing studies I've come across lately:
- Building a Better America – One Wealth Quintile at a Time
By Michael I. Norton and Dan Ariely, PDF here
Michael Norton, from Harvard Business School, and his colleague Dan Ariely, from Duke University, asked a random sample of US citizens what wealth distribution they think is ideal. In 2005, they surveyed 5,522 people. Asked about their voting in the 2004 election, the sample reproduced the actual result well. The survey respondents were given a definition of wealth so there was no ambiguity. Then they were shown three pie charts. Each slice of a pie represents 20% of the population, from the poorest to the wealthiest; the size of the slice is the share of wealth owned by this group. One pie showed a perfectly equal distribution. The other two pies were unlabeled but showed the distributions of the USA and of Sweden.
The result: 47% of Americans preferred the Swedish wealth distribution, followed by 43% for the equal distribution, while only 10% found ideal the actual distribution. Just focusing on the Swedish vs the US distribution, 92% of Americans prefer the Swedish one over their own.
[Source: Fig 1 of this paper]
It turns out that these preferences depend only very little on demographic factors like gender or whether they voted for Bush or Kerry in 2004. Considering how convinced Americans tend to be of their own greatness, this result seems somewhat puzzling. However, keep in mind that these pie charts were unlabeled in the questionnaire. The replies make sense once you come to the next question. In it, survey respondents were first asked to guess the wealth distribution in the USA, and then to choose what distribution they would find ideal. It turns out that most Americans severely underestimate the rich-poor gap in their own country, and in addition would prefer a distribution that is even more equal than their erroneous estimate. This is shown in the figure below.
[Source: Fig 3 of this paper]
Again, note how little both the estimate and the ideal depend on demographic factors.
This result fits quite well with previous studies which had shown that Americans overestimate the social mobility in their own country. They're still dreaming the American dream, despite its evident conflict with reality.
After I stopped laughing I started wondering what this result really means. The survey respondents quite clearly consider the present wealth distribution not ideal. However, the wealth distribution is a fairly abstract observable. Would you have been able to accurately estimate it? My own estimate would have been considerably closer to the actual one than the average guess, but only because I happen to have seen the relevant numbers before.
Norton and Ariely had a good reason to ask these questions: The philosopher John Rawls proposed that justice should be identified by taking a position behind a "veil of ignorance." For that, you're supposed to imagine that you decide on a particular question - for example the distribution of wealth - and only after you've decided are you randomly assigned a position within the society you've just created. I've never been really convinced by that approach. It's much too heady, or call it utopian. As a matter of fact, people don't live behind a veil of ignorance, and their own social status does influence their decisions. Also, it isn't only the ideal (size 4!) that's relevant, but also the way to get there (the diet). In fact, the way is typically the more immediate question and thus more prominent on people's minds.
If you just ask people what they think is ideal, you're probing their ideas about what they believe the wealth distribution means, not necessarily what they actually want. To get to the relevant point, one would have to ask about factors that actually affect their lives, or on which they have some basis to judge. Social mobility for example, the possibilities that are open to them and their children, is a relevant factor, and it is of course related to the wealth distribution. Or, instead of asking about the distribution of wealth, maybe better to ask whether they think somebody's work is really worth 1,000 times more than somebody else's. Another factor, and the one that bothers me most, is that wealth means power and influence. How much influence on your life do you want a small group of people to have? And at which point does this run into conflict with democratic decision making?
Bottom line: This is an interesting study. It explains a lot about the American attitude towards their country's income distribution and the sometimes puzzling disconnect between their wish for change on the one hand and their unwillingness to really take the necessary steps on the other: they believe the steps are smaller than they in fact are. However, it's not a result that should have any relevance for policy decisions, because the question asked is impractical. One doesn't choose a wealth distribution first and then get randomly assigned a place in that society. That's not how things work in real life, and it's just replacing one dream with another. There's always the risk the dream might later turn out to be a nightmare.
Aside: Dan Ariely, one of the authors of the study, writes a blog. He commented on his own paper here.
Saturday, November 06, 2010
My first reaction was: if you dislike learning, a university isn’t the right place for you, no matter what field you choose. Then I thought he might dislike not learning in general, but a particular sort of learning. It might be useful to distinguish the following four types of learning:
- Physical learning
Is the training of motion sequences through practice and exercise. Plays a major role for sports, playing an instrument, driving, and so on. It’s aiming for the goal, doing your scales practice, filling cuvettes till you manage without spilling, etc.
- Learning by doing
Is learning from cause and effect, trial and error. An omnipresent theme of children’s toys and school education. Many science museums, too, work on the principle of push here, look there. In contrast to physical learning though, the emphasis is not on learning a particular motion but on understanding a relation.
- Knowledge gathering
Is the classical learning of facts and data. Avogadro’s number is about 6 × 10^23. The capital of Turkey is Ankara. The Milky Way is about 100,000 light-years across. The can opener was invented 48 years after the can. Etc.
- Conceptual understanding
Is the learning of explanations and relations, theories and concepts. What is chaos? How does the stock market work? What makes airplanes fly? Why doesn’t the moon fall down on Earth?
Learning at school as well as at the university is typically a combination of these 4 types of learning. But the composition depends on the field, and it may substantially change from school to university. Languages for example are generally heavy in knowledge gathering. You just have to memorize that vocabulary, no way around it. And you can’t lead any argument in history without the names and dates. Biology, chemistry, physics and mathematics necessitate type 3 learning in declining order. Lab work is the contribution from 2.
At school, you will generally do well just by learning the facts, and that is, at least in my experience, also often where the emphasis of the educational system lies (except for classes like sports and music, which rely on type 1 learning). Especially in mathematics, however, it is possible already in school to replace type 3 learning with type 4 learning: You can either memorize a table of functions with their derivatives and integrals, or you can understand what a derivative and an integral is. You can learn the steps you have to follow to calculate the intersection of two planes in 3-dimensional space, or you can understand what the equations mean. Pupils who fly through math are typically those who understand the concepts, rather than those who learn a scheme for computation.
When you finish school and start studying math or physics, the relevance of memorizing facts drops dramatically. Who cares if you know Avogadro’s number - you can go look it up if you need it. Sure, it’s handy to know the distance from Earth to Sun, but it’s not going to impress your prof. In mathematics, the break with school practice is particularly dramatic. What you’ve done at school doesn’t prepare you for studying mathematics at all, except that some symbols might look vaguely familiar. To quote my younger self:
[W]hat's called mathematics in school has little to do with mathematics. It should more aptly be called calculation. Don't get me wrong, it is essential knowledge to be able to multiply fractions and calculate percentage rates, but it has about as much to do with mathematics as spreading your arms has with being a pilot. Problem is, that's about all most people ever get to know of mathematics. The actual heart of math however is not number crunching or solving quadratic equations, it's the abstraction, the development of an entirely self-referential, logically consistent language, detached from the burden of reality.
Both Stefan and I can recall from our first semesters those students who tried to continue the type 3 learning that had worked so well at school. You can indeed just learn by heart what your textbook says the variational principle is, and reproduce the relevant sentences when asked. You can memorize every example discussed in class, and learn technical terms by writing down definitions on a stack of index cards. This might get you through the first few semesters, but it’s not going to work in the long run. Both Stefan and I have seen the fellow students who proceeded this way drop out, one after the other.
To come back to the young man’s question: If what you dislike about physics at school is the emphasis on type 3 learning, chances are you’ll do just fine at the university. There are still the lab exercises where you have to stare at glowing wires for several hours or try to find anything on the oscilloscope besides the 50 Hz curve, but if I managed that, you can do it too.
I started studying mathematics and only changed to physics after my bachelor’s degree. That was possible because I had taken all the necessary classes and the department of mathematics had an agreement to that effect with the department of physics. It didn’t cause me any problems, and pretty much all of the additional math came in handy at some later point. I don’t know much about the requirements for informatics, but what I know from friends is that the first semesters are very heavy in math too. So in case of doubt, I’d recommend starting with math, because the change to either physics or informatics will be easier than in any other order. However, since the time I was in my first semesters, many regulations have changed to accommodate the European master’s program. I don’t know if, or under which conditions, it is still possible to change fields after the first semesters.
In summary: Don’t expect physics or math at the university to be a continuation of what you’ve done at school, neither as far as success nor boredom is concerned. It’s best to primarily follow your interests, because you will need perseverance and motivation.
Wednesday, November 03, 2010
How the Nine Months Before Birth Shape the Rest of Our Lives
By Annie Murphy Paul
Free Press (September 28, 2010)
I thought the acronym FOAD stood for fuck off and die, but Annie Paul taught me it stands for "Fetal Origins of Adult Disease." Maybe I wasn't the only one with that association, because from her book "Origins: How the Nine Months Before Birth Shape the Rest of Our Lives" I also learned that this research field was later renamed DOHaD - "Developmental Origins of Health and Disease." And that's what her book is about: the increasing amount of scientific evidence that besides our genetic inheritance and individual experience, who we are and what we will be is influenced by a third, and long neglected, factor - the nine months spent inside our mother's womb.
As the renaming of this flourishing research area indicates, these studies are interesting not only for understanding the origins of diseases, but also as guides to the health of coming generations. Unlike our genetic information, the conditions in utero are to some extent accessible to prevention and intervention. It has long been known, for example, that the same genetic information (genotype) might come in different appearances (phenotype), but exactly how this mechanism works, and how the phenotype is affected in particular during gestation, has only recently become accessible to scientific investigation.
Annie Paul is a science journalist, and her book is a survey of recent and not-so-recent studies on DOHaD, together with historical anecdotes and reports of interviews with scientists, all woven together with the story of her own pregnancy. The book's chapters are (guess) month one to nine, and the reference list is extensive. It is a well-written, classical and flawless piece of good science journalism. It also comes with the typical weaknesses of the genre. While Paul has thoroughly scanned the literature, she reports rather than explains, and if she has an opinion of her own on a particular controversial issue, she does not offer it. Since, in addition, a book on such a popular level cannot explain in much detail the studies it reports on, the reader who doesn't go and check the literature himself has little chance to form an informed opinion. While Annie Paul clears up a few decades-old myths (for example the advice that showering with baking soda increases the chances of conceiving a boy), most of her book is a collection of topics and studies presently under discussion, and also an outlook on studies planned and on the consequences of what we have learned or may learn.
She covers the influence of traumatic experiences and stress on the developing fetus, environmental toxins, drugs and medication, and preexisting conditions of the mother (such as obesity or diabetes). The reader learns that there are studies that claim to have shown that eating a bowl of cereal in the morning increases a mother's chance of having a boy - and others that claim the result is nonsense - that a mother's experience of high stress or periods of hunger affects the survival chances of male fetuses more strongly than those of female ones, and that daily chocolate consumption by a pregnant woman results in happier babies. Paul also briefly touches on economic factors, citing studies that have shown that people born in periods of hunger or widespread disease on average have a lower income as adults than those who were born before or conceived after the tough times.
Annie Paul mostly just documents the research, but in a few paragraphs here and there she takes on the question of what impact this research may have on our societies in the future and what the benefit of this area of science may be. She hopes that babies born into difficult social situations - often correlated with malnutrition, drug abuse, stress or trauma - will have a chance of doing better than their parents if special care is taken of pregnant women, or if children at risk for problems can be identified in advance and offered targeted help. She also hopes that in cases of natural disasters or war, mothers-to-be will receive psychological support to prevent their babies from being affected.
This all sounds very sensible, but in several instances Paul comes close to arguing for this additional care in terms of improved economic output: healthy and happy children grow up to be more productive adults, so our societies should have an interest in this investment. I have encountered similar arguments repeatedly when it comes to health care, and I am wary of the implications. It is a quite slippery slope: if you step on it, you easily slide down to where you'll find that investments that will not pay off should not be made. It is very likely that understanding the origins of adults' diseases and problems will in some cases lead to better treatments, but a treatment may not pay off in economic terms. To me, it is more a matter of empathy and solidarity than one of productivity to offer such support.
Taken together, Annie Paul's book has provided me with a wealth of interesting and entertaining study results, yet with little insight into their scientific credibility. It has given me an excuse to munch down Stefan's chocolate, reminded me of the weakness of the male part of our species, and given me a bad conscience for not clearing my household of plastics containing Bisphenol A, whether or not scientists will eventually find them a reason for concern. Paul's book is an easy read, yet I would have appreciated a somewhat deeper coverage of the underlying science. I'd give this book 3 out of 5 stars if I had stars to give - in other words, it's not a must-read; you can wait for the paperback version.
Monday, November 01, 2010
Unfortunately, I'm having some complications that prompted the doctors to put me on medication and bed rest already several weeks ago. I've been on sick leave since, trying to stay horizontal as much as possible, with weekly check-ups. After last week's exam the doctor recommended that if I plan on taking any flights before delivery, I should do so sooner rather than later.
So I packed my bag, rebooked my flight - and now I'm back in Germany. The prospect of staying here for almost half a year is admittedly odd. I haven't lived in Germany for almost 7 years now. When I moved to Arizona for my first postdoc I never meant to stay away for more than a year. Had you told me then that I'd only come back after a detour through California, Canada, and Sweden, in late 2010, 7 months pregnant with twins, to move in with a guy I've been married to for more than 4 years yet have never shared an apartment with, I'd have declared you nuts.
Funny, the way life goes, eh?
In any case, moving in with Stefan some weeks earlier than planned means I've stepped right into his moving chaos. We're sitting on boxes, waiting for phone and internet, and have no kitchen appliances. Also, we're facing difficult decisions. For example, Stefan is left-handed but I am right-handed. So which side of the toilet do we put the paper?
The babies meanwhile are doing fine, growing properly and kicking more strongly every day. My belly is presently the size of a nine-month single pregnancy, yet, scarily enough, it has to grow for 10 more weeks.
News to me is that Halloween has become a seasonal event in Germany. When I was a kid, that tradition was pretty much unknown here. Now, people have carved pumpkins on their doorsteps, stock up on candy and welcome another opportunity to go out and get drunk. No, I didn't carve a pumpkin. I feel like one myself, that's enough seasonal event for me.
Within the last decade or so, Germany has also seen a boom of new shopping malls outside the city centers. On the weekend, Stefan and I went to Starbucks in one of these malls in the area. Half of the guests seemed to be Americans, probably because the US Army has troops in nearby Mannheim, and really, where can you go on a weekend other than Starbucks? The whole place was eerily non-national, and crowded besides, so we ordered our coffees to go. Then somebody left and I managed to occupy a table. Sitting there with the paper cups quickly got us a reprimand from the barista for producing unnecessary garbage. Suddenly the air smelled German again.
Friday, October 29, 2010
- A very nice applet that zooms you through the scales of the universe, all the way down to the Planck length.
- An interesting recollection by Robert Weisbrot of Edward Witten's way to physics:
"I am reminded of a friend from the early 1970s, Edward Witten. I liked Ed, but felt sorry for him, too, because, for all his potential, he lacked focus. He had been a history major in college, and a linguistics minor. On graduating, though, he concluded that, as rewarding as these fields had been, he was not really cut out to make a living at them. He decided that what he was really meant to do was study economics. And so, he applied to graduate school, and was accepted at the University of Wisconsin. And, after only a semester, he dropped out of the program. Not for him. So, history was out; linguistics, out; economics, out. What to do? This was a time of widespread political activism, and Ed became an aide to Senator George McGovern, then running for the presidency on an anti-war platform. He also wrote articles for political journals like the Nation and the New Republic. After some months, Ed realized that politics was not for him, because, in his words, it demanded qualities he did not have, foremost among them common sense. All right, then: history, linguistics, economics, politics, were all out as career choices. What to do? Ed suddenly realized that he was really suited to study mathematics. So he applied to graduate school, and was accepted at Princeton. I met him midway through his first year there--just after he had dropped out of the mathematics department. He realized, he said, that what he was really meant to do was study physics; he applied to the physics department, and was accepted.
I was happy for him. But I lamented all the false starts he had made, and how his career opportunities appeared to be passing him by. Many years later, in 1987, I was reading the New York Times magazine and saw a full-page picture akin to a mug shot, of a thin man with a large head staring out of thick glasses. It was Ed Witten! I was stunned. What was he doing in the Times magazine? Well, he was being profiled as the Einstein of his age, a pioneer of a revolution in physics called "String Theory." Colleagues at Harvard and Princeton, who marvelled at his use of bizarre mathematics to solve physics problems, claimed that his ideas, popularly called a "theory of everything," might at last explain the origins and nature of the cosmos. Ed said modestly of his theories that it was really much easier to solve problems when you analyzed them in at least ten dimensions. Perhaps. Much clearer to me was an observation Ed made that appeared near the end of this article: every one of us has talent; the great challenge in life is finding an outlet to express it. I thought, he has truly earned the right to say that. And I realized that, for all my earlier concerns that he had squandered his time, in fact his entire career path--the ventures in history, linguistics, economics, politics, math, as well as physics--had been rewarding: a time of hard work, self-discovery, and new insight into his potential based on growing experience."
[Via Michael Nielsen, via Hacker News. Read the full speech here.]
- You might already have read it on Nature News: Astronomers have found the most massive neutron star to date, with about 2 solar masses. When I read this, a bell rang faintly in the dusty back of my head. Meanwhile I've figured out what was ringing: Smolin's Cosmological Natural Selection predicts an upper mass limit for neutron stars of 1.6 solar masses. (See hep-th/0612185, section 3.2).
- Some months ago I was sent a link to an April fools day paper, funny-haha, physicist-style. That paper has now resurfaced on my desk: Schrödinger's Cat is not Alone. It's a humorous take on the interpretation of quantum mechanics and cat dynamics. Not the sort of humor that deepens my laugh wrinkles, but I thought some of you might find it amusing.
- Here's something that did give me a good laugh. Real life absurdity:
Nurses find the weirdest stuff. [Via Bora].
I wish you all a nice weekend!
Saturday, October 23, 2010
I have discussed many times on this blog the differences and similarities between the "Marketplace of Ideas" and the free marketplace of products. The most relevant difference is the property the system is supposed to optimize. For our economies it is profit and - if you believe the standard theory - this ideally results in the most efficient use of resources. One can debate how well the details work, but by and large it has indeed worked remarkably well. In the academic system however, the property to optimize is "good research" - a vague notion with subjective value. Before nature's judgement on a research proposal is available, what does or doesn't constitute good research is fluid and determined by the scientific community, which is also the first consumer of that research. Problems occur when one tries to impose fixed criteria for the quality of research, some measure of success. It sets incentives that can only divert the process of scientific discovery (or invention?) from its original goal.
That is, as I see it, the main problem: setting wrong incentives. Here, I want to focus on a particular example, that of accountability and advance planning. In many areas of science, projects can be planned ahead and laid out in the detail that pleases funding agencies. But everybody who works in fundamental research knows that attempting to do the same in this area too is a complete farce. You don't know where your research will take you. You might have an idea of where to start, but then you'll have to see what you find. Forced to come up with a 3-year, 5-point plan, some researchers, I've found, apply for grants after a project has already been finished, just not published, and then spend the grant on what is actually their next project. Of course that leads the whole system ad absurdum, and few can afford the luxury of delaying publication.
The side-effect of such 3-year pre-planned grants is that researchers adapt to the requirements and think in 3-year pre-plannable projects. Talk about setting incentives. The rest is good old natural selection. The same is true for the 2- or 3-year postdoc positions that, just this month, thousands of promising young researchers are applying for. If you sow short-term commitment, you reap short-term thinking. And that's disastrous for fundamental research, because the questions we really need answers to will remain untouched, except by those courageous few scientists who willingly risk their future.
Let us look at where the trends are going: The percentage of researchers in the USA holding faculty positions 7 years after obtaining their degree has dropped from 90% in '73 to 60% in 2006 (NSF statistics, see figure below). The share of full-time faculty declined from 88% in the early 1970s to 72% in 2006. Meanwhile, postdocs and others in full-time nonfaculty positions constitute an increasing percentage of those doing research at academic institutions, having grown from 13% in 1973 to 27% in 2006.
The American Association of University Professors (AAUP) has compiled similar data showing the same trend, see the figure below depicting the share of tenured (black), tenure-track (grey), non-tenured (stripes) and part-time (dots) faculty for the years 1975, 1989, 1995 and 2007 [source] (click to enlarge).
In their summary of the situation, the AAUP is blunt: "The past four decades have seen a failure of the social contract in faculty employment... Today the tenure system [in the USA] has all but collapsed... the majority of faculty work in subprofessional conditions, often without basic protections for academic freedom."
In their report, the AAUP is more concerned with the quality of teaching, but these numbers also mean that more and more research is done by people on temporary contracts, who at the time they start their job already have to think about applying for the next one. Been there, done that. And I am afraid this shift of weight towards short-term thinking will have disastrous consequences for the fundamental research that gets accomplished, if it doesn't already have them.
In the context of setting wrong incentives and short-term thinking, another interesting piece of data is Pierre Azoulay et al.'s study
- Incentives and Creativity: Evidence from the Academic Life Sciences
By Pierre Azoulay, Joshua S. Graff Zivin, Gustavo Manso
In their paper, the authors compared the success of researchers in the life sciences funded under two different programs, the Howard Hughes Medical Institute (HHMI), which "tolerates early failure, rewards long-term success, and gives its appointees great freedom to experiment" and the National Institute of Health (NIH), with "short review cycles, pre-defined deliverables, and renewal policies unforgiving of failure." Of course the interpretation of the results depends on how appropriate you find the measure used for scientific success, the number of high-impact papers produced under the grant. Nevertheless, I find it tale-telling that, after suitably adjusting for researchers' average qualification, the HHMI program funding 5 years with good chances of renewal produces a better high-impact output than the NIH 3-year grants.
And speaking of telling tales, let me quote for you from the introduction of Azoulay et al's paper which contains the following nice anecdote:
"In 1980, a scientist from the University of Utah, Mario Capecchi, applied for a grant at the National Institutes of Health (NIH). The application contained three projects. The NIH peer-reviewers liked the first two projects, which were building on Capecchi's past research efforts, but they were unanimously negative in their appraisal of the third project, in which he proposed to develop gene targeting in mammalian cells. They deemed the probability that the newly introduced DNA would ever find its matching sequence within the host genome vanishingly small, and the experiments not worthy of pursuit.
The NIH funded the grant despite this misgiving, but strongly recommended that Capecchi drop the third project. In his retelling of the story, the scientist writes that despite this unambiguous advice, he chose to put almost all his efforts into the third project: "It was a big gamble. Had I failed to obtain strong supporting data within the designated time frame, our NIH funding would have come to an abrupt end and we would not be talking about gene targeting today." Fortunately, within four years, Capecchi and his team obtained strong evidence for the feasibility of gene targeting in mammalian cells, and in 1984 the grant was renewed enthusiastically. Dispelling any doubt that he had misinterpreted the feedback from reviewers in 1980, the critique for the 1984 competitive renewal started, "We are glad that you didn't follow our advice."
The story does not stop there. In September 2007, Capecchi shared the Nobel prize for developing the techniques to make knockout mice with Oliver Smithies and Martin Evans. Such mice have allowed scientists to learn the roles of thousands of mammalian genes and provided laboratory models of human afflictions in which to test potential therapies."