The Incredible Shrinking Atom: The trick is to take the electron in a hydrogen atom and replace it with a muon.  This is a particle 207 times heavier than an electron, but otherwise very similar.  Unfortunately a muon has a half-life of just 2 microseconds: then it decays into an electron and some other crud.  

Originally shared by John Baez

Miniature atoms

In The Incredible Shrinking Man, a guy exposed to radiation becomes smaller and smaller.   Eventually he realizes he’ll shrink forever – even down to subatomic size.  Of course that’s impossible.  But guess what: we can now make miniature atoms!

In fact we can make atoms almost like hydrogen, but 1/186 times as big across.  Unfortunately they only last 2 microseconds.  But that’s still long enough for them to form molecules, and for us to do chemical experiments with them.  Chemists have gotten really good at this stuff.

The trick is to take the electron in a hydrogen atom and replace it with a muon.  This is a particle 207 times heavier than an electron, but otherwise very similar.  Unfortunately a muon has a half-life of just 2 microseconds: then it decays into an electron and some other crud.  

Why is an ordinary hydrogen atom the size it is, anyway?  It’s the uncertainty principle.  The atom is making its energy as small as possible while remaining consistent with the uncertainty principle.  

A hydrogen atom is made of an electron and a proton.  If it were bigger, its potential energy would increase, because the electron would be further from the proton.  So, the atom “wants to be small”.  And without quantum mechanics to save it, it would collapse down to a point: The Incredible Shrinking Atom.

But if the atom were smaller, you’d know the position of its particles more precisely – so the uncertainty principle says you’d know their momentum less precisely.  They’d be wiggling around more wildly and unpredictably.  So the kinetic energy would, on average, be higher.  

So there’s a tradeoff!  Too big means lots of potential energy.  Too small means lots of kinetic energy.  Somewhere in the middle is the best – and you can use this to actually calculate how big a hydrogen atom is!   
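That tradeoff can be carried out as an actual calculation.  Here’s a minimal Python sketch (my own illustration, not from the original post) of the standard back-of-envelope estimate: write the energy as E(r) ≈ ħ²/(2mr²) − e²/(4πε₀r), and the size that minimizes it is the Bohr radius:

```python
import math

# Back-of-envelope estimate: E(r) ~ hbar^2/(2 m r^2) - e^2/(4 pi eps0 r).
# Setting dE/dr = 0 gives r = 4 pi eps0 hbar^2 / (m e^2) -- the Bohr radius.
hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def energy(r):
    """Kinetic-plus-potential estimate for an atom of size r."""
    return hbar**2 / (2 * m_e * r**2) - e**2 / (4 * math.pi * eps0 * r)

r_bohr = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(r_bohr)  # ~5.29e-11 m, about half an angstrom

# sanity check: the energy is higher on either side of the minimum
assert energy(0.5 * r_bohr) > energy(r_bohr) < energy(2 * r_bohr)
```

Note that the electron mass appears in the denominator of r, which is exactly why a heavier particle gives a smaller atom.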

But what if you could change the mass of the electron?  This would change the calculation.  It turns out that making electrons heavier would make atoms smaller!  

While we can’t make electrons heavier, we can do the next best thing: use muons.

Muonic hydrogen is a muon orbiting a proton.  It’s like an atom, but much smaller than usual, so it does weirdly different things when it meets an ordinary atom.  It’s a whole new exotic playground for chemists.  

And, you can do nuclear fusion more easily if you start with smaller atoms!  It’s called muon-catalyzed fusion, and people have really done it.  The only problem is that it takes a whole lot of energy to make muons, and they don’t last long.  So, it’s not practical – it doesn’t pay off.  At least not yet.  Maybe we just need a few more brilliant ideas.

By the way: a while ago I talked about making a version of hydrogen where we keep the electron and replace the proton by a positively charged antimuon.  That’s called muonium.  Muonium is lighter than ordinary hydrogen but almost the same size, just a tiny bit bigger.  It’s chemically almost the same as hydrogen, except that it decays in 2 microseconds.  

With muonic hydrogen it’s the reverse: it’s a lot smaller, but it’s just a bit heavier.  It’s chemically very different from ordinary hydrogen.

Finally, for the übernerds:

If you do the calculation, you can show that the radius of a hydrogen-like atom is proportional to

1/m + 1/M

where m is the mass of the lighter particle and M is the mass of the heavier one.  If we say an electron has mass 1, then a muon has mass 207 and a proton has mass 1836.  You can use this formula to see that muonic hydrogen has a radius 1/186 as big as ordinary hydrogen, while muonium has a radius 1.004 times as big.  
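If you’d like to double-check those numbers, here’s a quick Python sanity check (mine, not the original author’s) of the 1/m + 1/M scaling, with all masses in units of the electron mass:

```python
# radius of a two-body "hydrogen" scales as 1/m + 1/M
# (the inverse reduced mass), with masses in electron-mass units
def relative_radius(m, M):
    return 1.0 / m + 1.0 / M

m_e, m_mu, m_p = 1.0, 207.0, 1836.0

hydrogen = relative_radius(m_e, m_p)    # electron + proton
muonic   = relative_radius(m_mu, m_p)   # muon + proton
muonium  = relative_radius(m_e, m_mu)   # electron + antimuon

print(hydrogen / muonic)    # ~186: muonic hydrogen is 186 times smaller
print(muonium / hydrogen)   # ~1.004: muonium is slightly bigger
```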

The authors talk about the increased availability of opioid painkillers, and the increased use of heroin within this group, as a possible contributing factor, but it seems hard to ignore the ties between this rise and the collapse of the prospective economic futures of people in this group.

Originally shared by Yonatan Zunger

You may have seen this story circulating around the press: non-Hispanic whites in the US, aged 45-54, are dying at an alarming rate. I’m sad to say that, after going through the original research fairly carefully, the researchers appear to have done a good job – the results are real, and telling.

First of all, a link: The research itself is available online at . It’s a very readable paper, and if you’re comfortable with the scientific literature, I encourage you to read it. The Washington Post article (linked below) is probably the best general-public summary so far.

Second, let me summarize what the research did and found. They looked at records of mortality and morbidity (M&M for short; morbidity in this case means medical conditions which significantly affect one’s ability to function in daily life) from the Centers for Disease Control (CDC), which study these things carefully, and dug into the statistics. What they found is that for all groups in the developed world, M&M has been steadily decreasing – with one notable exception.

Note that this doesn’t mean that all groups have good M&M rates: for example, the rate for black, non-Hispanic adults in the US is much worse than the rate for white, non-Hispanic adults, but that rate in 2013 is much better (almost 50% better!) than it was in 1998. Improvements have been happening across the board.

The one marked exception was white, non-Hispanic adults with less than a Bachelor’s degree. For this group, three particular sources of death have been surging since 1998: suicide, drug and alcohol poisoning, and chronic liver disease and cirrhosis. This surge has affected all age groups, and appears to affect men and women equally; but it affects people with less than a high school diploma the most, people with a high school diploma or some college significantly, and people with a college degree or more very little.

For people aged 45-54 in particular, this surge has been so high as to completely counter all other improvements in mortality. The graph below shows the annual death rate for various groups for that age range.

The scale of this effect is tremendous, corresponding to roughly half a million excess deaths during this 15-year period. That’s on the same scale as the US death toll from the AIDS epidemic, which claimed 650,000 lives from 1981 to 2015. And like with epidemic diseases, for every one person who dies, many more are sickened and their lives are impaired.

And given what appears to be a very rigorous analysis of the data, I think we have to accept that this result is real. 

The authors talk about the increased availability of opioid painkillers, and the increased use of heroin within this group, as a possible contributing factor, but it seems hard to ignore the ties between this rise and the collapse of the prospective economic futures of people in this group.

If I had to look at this for unexpected patterns besides the blindingly obvious, a few things strike me:

* Its limitation to the non-Hispanic white population is interesting, because most things that go horribly wrong will also hit the black and Hispanic population as well. The one notable exception is when things are already bad for those populations and don’t get any worse.

Interestingly, that exception appears to apply to the recent economic downturn. I recently wrote a post about the effects of redlining and economic policy on people’s wealth, and one of the interesting things which showed up in the main data graph which drove that post (which that particular article didn’t spend too much time on) was how the Great Recession of 2007 had a huge effect on the net wealth of the white population, but very little on the Hispanic and black population. In no small part, that’s because the economic policies of previous decades left those populations with so little wealth (and so little housing wealth, in particular) that they had little left to lose in that recession. 

In fact, this sort of effect would synchronize well with the results of this new paper, since non-Hispanic white Americans with less than a college degree were (by all metrics) the ones most affected by the recent economic troubles: entire job sectors which this group dominated prior to this period, such as manufacturing, have essentially crashed and seem unlikely ever to recover, at least to the extent of providing quasi-middle-class existences to anyone.

* The gender balance of the effect was somewhat surprising to me. I would have guessed that a process like this would affect men more than women, as they are more likely to occupy the position of “breadwinner.” However, the effects of an economic crash will hit all of a family, and the male/female breadwinner ratio has been declining for decades, so apparently this is not a statistically significant difference.

* The extreme specificity of the causes of death which triggered this rise surprised me. I would expect that any rise in causes of death would be fairly broad, if nothing else because of the fraction of suicides misclassified as accidents or other causes of death. Apparently, this was not the case.

* Poisoning – that is, accidental or “intent undetermined” deaths from overdoses of alcohol, prescription, and illegal drugs – has surpassed lung cancer as a cause of death in this 45-54 group, and suicide is likely to do so within the next two or three years.

* Other things that you may expect to correlate with this shift in death rates, such as obesity, don’t. While people with a BMI over 30 have higher rates of all of the various morbidity and mortality types, they have seen the same rates of change as the greater population, and the change in rates of obesity itself contributed only a small amount to total health rate changes.

* Other countries in this study had similar economic problems, but none of them showed the same shift in M&M rates. The paper notes that these countries use different methods for retirement: defined-benefit pension plans, as opposed to the US, which has shifted largely to defined-contribution plans, which are much more vulnerable to stock market risk. While I think there isn’t enough data to strongly link these two (there are, after all, quite a few other differences between the countries), the reasons for these differences definitely bear further investigation.

So I don’t have any strong public policy recommendations here, except one: there is a real, severe, and lethal public health crisis spreading over the country, and we need to treat it as one. Further study is definitely needed to identify not simply the root cause, but the factors which make this so much more lethal for one group than for others. And we need to be ready to act seriously in order to stanch the bleeding.

Order is essential in the definition of multiplication because not all forms of multiplication are commutative, such as matrix multiplication. This is why it is taught as a separate property.

Originally shared by Science on Google+

Why Was 5 x 3 = 5 + 5 + 5 Marked Wrong?

It seems absurd at first glance: we all know that 5 x 3 is equal to 3 x 5, which is 15. But check out the formal definition of multiplication:

The multiplication of two whole numbers, when thinking of multiplication as repeated addition, is equivalent to adding as many copies of one of them (multiplicand, written second) as the value of the other one (multiplier, written first).

In other words, 5 x 3 = 3 + 3 + 3 + 3 + 3

Why should this matter? It matters because the term equal is not the same as equivalent. Although 5 x 3 is equal to 5 + 5 + 5 it is not equivalent to 3 + 3 + 3 + 3 + 3. Suppose you were buying chocolates for your sweethearts on Valentine’s Day. You would have 3 boxes of 5 chocolates each in one case, and 5 boxes of 3 chocolates in the other case. What you choose to buy depends on how many sweethearts you are trying to impress, right? 

Perhaps more importantly, the difference is also a fundamental concept in computer science. 

Notice that the second problem is marked incorrect as well. That’s because keeping rows and columns straight in matrix multiplication is important. As explained in the link below: “Order is essential in the definition of multiplication because not all forms of multiplication are commutative, such as matrix multiplication. This is why it is taught as a separate property.”
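To see that order-dependence concretely, here’s a tiny Python sketch of 2×2 matrix multiplication, written from scratch so nothing is hidden:

```python
def matmul(A, B):
    # multiply two 2x2 matrices given as nested lists:
    # entry (i, j) is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```

Swapping the factors swaps the columns vs. the rows, so A·B and B·A really are different matrices.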

So, what do you think? Do you agree with the teacher or not?

What would happen if you dropped a billion grains of sand on top of each other and let them cascade into a stable pattern following a simple mathematical rule? Find out more below. (The answer is in the picture.)

Originally shared by Richard Green

The sandpile model with a billion grains of sand

This picture by Wesley Pegden shows an example of a stable configuration in the abelian sandpile model on a square lattice. This consists of a square array of a large number of pixels. Each pixel has one of four possible colours (blue, cyan, yellow and maroon) corresponding to the numbers 0, 1, 2 and 3 respectively. These numbers should be thought of as representing stacks of tokens, often called chips, which in this case might be grains of sand.

Despite its intricate fractal structure, this picture is generated by a simple iterative process, as follows. If a vertex of the grid (i.e., one of the pixels) holds at least 4 chips, it is allowed to fire, meaning that it transfers one chip to each of its neighbours to the north, south, east and west. The boundary of the grid can be thought of as the edge of a cliff, meaning that any chips that cross the boundary will fall off and be lost. If no vertices can fire in a particular chip configuration, then the configuration is called stable. For example, the configuration in the picture is stable, because no pixel holds 4 or more chips.

One of the key theorems about this particular sandpile model is that any chip configuration will become stable after firing various vertices a finite number of times. More surprisingly, the ultimate stable configuration obtained does not depend on the order in which the vertices were fired. The irrelevance of the order in which the vertices are fired is why the model is called “abelian”.

If we start with 2^{30} chips, all placed on the same pixel, and we then perform firings repeatedly until a stable configuration is reached, the resulting stable configuration is the one shown in the picture. (The number 2^{30} is just over a billion.) It is clear from the symmetrical nature of the firing rules that the resulting picture will be symmetric under rotation by a right angle or mirror-reflection, but the fractal-like structures are much more surprising.
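The firing rule is simple enough to sketch in a few lines of Python. This is a toy version on a 5×5 grid with 16 chips, just to show the mechanics; the picture above comes from the same rule run with 2^{30} chips on a vastly larger grid:

```python
def stabilize(grid):
    # repeatedly fire any cell holding at least 4 chips; chips sent
    # across the boundary fall off the "cliff" and are lost
    n = len(grid)
    done = False
    while not done:
        done = True
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    done = False
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n:
                            grid[ni][nj] += 1
    return grid

# drop 16 chips on the centre of a 5x5 grid and let them cascade
g = [[0] * 5 for _ in range(5)]
g[2][2] = 16
stabilize(g)
print(g)
```

Because the model is abelian, scanning the grid in any order gives the same final stable configuration.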

Relevant links

Wesley Pegden is an Assistant Professor of Mathematics at Carnegie Mellon University. His home page includes an interactive zoomable version of this image:

The page also allows you to generate corresponding images on other lattices, including a triangular lattice with six colours, and a hexagonal lattice with three colours. You can also change the number of chips, like Dr Evil from Austin Powers. (One billion chips. No, one million chips.)

The article The Amazing, Autotuning Sandpile by Jordan Ellenberg appeared in Nautilus in April 2015:

The article discusses this picture, and also includes some details from it, including a close-up of the centre.

I have posted about the sandpile model before, here:

The other post includes more technical details, and describes how to construct a group out of certain sandpiles. A surprising feature of this is that the identity element of the group has a very complicated appearance.

David Perkinson is a Professor of Mathematics at Reed College. He has a gallery of images relating to abelian sandpiles:

(Found via Cliff Pickover (@pickover) on Twitter.)

#mathematics #scienceeveryday

The center of a black hole is not so much a place as  a moment in time. It is literally as inescapable as tomorrow.

Originally shared by Jonah Miller

Falling into a black hole with a flashlight.

It’s a common misconception that, because an observer falling into a black hole appears to stop, time stops for that observer. That’s not really true.

Think of it this way. The event horizon is the point where light cannot escape the black hole, right? Well, suppose I jump into a black hole carrying a flashlight and you watch. As I get closer to the event horizon, the light rays from my flashlight will have a harder and harder time getting away from the black hole to your eyes.

Eventually, they won’t be able to get to your eyes at all… after I pass the event horizon. But, the instant before I fall in, the light rays will take a huge, but finite time to reach you. 

The effect is that, for the rest of the age of the universe, you will see the light rays I emitted just before I passed the event horizon. And I will appear to have stopped.

Past the event horizon

Although to outside observers I appear to have stopped, I may not actually notice anything strange as I pass the event horizon (depending on the size of the black hole). I will eventually be torn apart by tidal forces, but for a large black hole, these are weak near the event horizon.

From my perspective, I’m just travelling through space, and I experience time somewhat normally. This is the difference between proper time and coordinate time. Proper time is the time experienced by a person. Coordinate time is just a label.
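One standard quantitative handle on this (textbook general relativity, not something spelled out in the original post): a static observer hovering at radius r outside a Schwarzschild black hole with Schwarzschild radius rs has proper time running slower than coordinate time by a factor sqrt(1 - rs/r), which goes to zero at the horizon. A small Python sketch:

```python
import math

def dilation_factor(r, rs):
    """dtau/dt for a static observer at radius r outside a
    Schwarzschild black hole with Schwarzschild radius rs."""
    if r <= rs:
        raise ValueError("no static observers at or inside the horizon")
    return math.sqrt(1.0 - rs / r)

# approaching the horizon, the factor -> 0: this is why a distant
# observer sees the falling flashlight appear to freeze
for r in (10.0, 2.0, 1.1, 1.01):
    print(r, dilation_factor(r, 1.0))
```

This factor describes a hovering observer; a freely falling observer, as in the post, still experiences their own proper time perfectly normally.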

However, one strange thing will happen to me inside the black hole… and that is that the singularity, the center of the black hole, is irrevocably in my future. The center of a black hole is not so much a place as  a moment in time. It is literally as inescapable as tomorrow.

This post was inspired by a question on the Science on Google+ community by Dustin Thurston . Original post here:

Image is a simulation of gravitational lensing of the Milky Way by a black hole. (Said black hole does not exist.) The creator, Ute Krauss, has a lot of great relativity visualizations. You can find them here:

Source: The description I just gave can be found in most general relativity textbooks. For a free treatment, I recommend Sean Carroll’s lecture notes, published online:

#physics   #astrophysics   #science  

Finally, as noted in the press release (see the first link in this post), the researchers have not patented their technology as they want to make it freely available to everyone who needs access to the improved varieties of cassava.

Originally shared by Robert Woodman

Engineering Cassava to Combat Vitamin B6 Deficiency

Press release: (free)

Original research note: (paywall)

Vitamin B6 is an essential nutrient for humans. It is required for numerous biochemical processes in the human body, and deficiency in this vitamin is associated with numerous pathological conditions, including cardiovascular disease, diabetes, various neurological diseases, and nodding syndrome (NS), which is a childhood condition found rather commonly in eastern Africa in areas where vitamin B6 deficiency is endemic. (1,2)

A recent publication in Nature Biotechnology by a multinational group of scientists has found a possible way to solve vitamin B6 deficiency through genetic engineering of cassava. Cassava, also called Brazilian arrowroot, manioc, tapioca, and yuca (not the same as the unrelated plant known as yucca) is a New World plant that has become an important dietary staple throughout the tropical and subtropical world (3). Cassava is as important to African farmers as rice is to Asian farmers or as wheat and potatoes are to European farmers (4, citing a personal communication). Various researchers have suggested for some time that genetic engineering of cassava could be used to ameliorate malnutrition and dietary deficiencies (2, 4-7). The paper in Nature Biotechnology reports the successful engineering of cassava to produce vitamin B6 (1).

The enzymes PDX1 and PDX2 are needed to synthesize vitamin B6 in plants. Genes encoding PDX1 and PDX2 were taken from the plant Arabidopsis thaliana (8) and modified to put them under the control of one of two promoters. One promoter (CaMV35S) allowed PDX1 and PDX2 to be expressed throughout the entire cassava plant, while the other promoter (Patatin) enhanced expression of the two genes in the cassava roots. Engineered cassava plants from each group, named 35S and PAT (based on the promoter that was used), were then grown from tissue culture in a greenhouse. Evaluation of the resulting plants showed no significant morphological differences but did show a large increase in vitamin B6 in the plants’ leaves and roots (35S) or in the roots (PAT). The amount of vitamin B6 in the transgenic leaves was increased 3.9-fold to 48.2-fold over wild-type cassava, while the amount of vitamin B6 in transgenic roots increased 1.9- to 5.8-fold over wild-type cassava. Evaluation in a test field in Shanghai, China, showed that the genetically engineered plants were stable when grown under field conditions.

One significant difference between engineered and wild-type cassava did emerge in the study. Using high-performance liquid chromatography (HPLC), the research group established that the vitamin B6 that accumulated in the engineered cassava plants’ leaves and roots was mostly in the unphosphorylated form. Only the phosphorylated esters of vitamin B6 are active in the body, but the unphosphorylated forms of vitamin B6 are more stable to storage and to heating. Cassava is typically boiled before eating to remove toxic compounds known as cyanogens, and a substantial fraction (from 15% up to 75%) of the vitamin B6 in cassava can be lost to boiling (9; see also 1, at page 1031). Thus, having a cassava plant with enhanced vitamin B6 production means that more vitamin B6 will be available to the eater after the plant is cooked.

Finally, the authors examined the bioavailability of the vitamin B6 produced by the genetically engineered plants and found that the vitamin B6 produced by the transgenic cassava was highly available to be absorbed by the consumer of the cassava. Indeed, the authors noted that “[u]sing bioavailable ‘vitamin B6 equivalents’, we calculated that the vitamin B6 recommended dietary allowance for an adult person (1.3 mg/day) would be reached with 51 g of boiled 35S-5 leaves or 505 g (~1.7 lb) of boiled PAT-12 storage roots” (1, at page 1031).

This paper shows that cassava, an important dietary staple, can be genetically engineered to produce more vitamin B6. The increased amounts of vitamin B6 will help alleviate nutritional deficiencies in Africa, improving the health and well-being of people who depend on cassava as a key component of their diet. Further, other modifications to cassava are possible to further improve the nutritional quality of this important plant. Finally, as noted in the press release (see the first link in this post), the researchers have not patented their technology as they want to make it freely available to everyone who needs access to the improved varieties of cassava. The groups involved in this research are now working with African scientists to try and introduce this modified cassava to African farmers.


(1) Kuan-Te Li, et al. Increased bioavailable vitamin B6 in field-grown transgenic cassava for dietary sufficiency. Nature Biotechnology 33, 1029–1032 (2015), doi:10.1038/nbt.3318. (paywall)

(2) Ian S. Blagbrough, Soad A.L. Bayoumi, Michael G. Rowan, and John R. Beeching. Cassava: An appraisal of its phytochemistry and its biotechnological prospects. Phytochemistry 71, 1940–1951 (2010), doi:10.1016/j.phytochem.2010.09.001 

(3) Cassava. (2015, October 28). In Wikipedia, The Free Encyclopedia. Retrieved 20:23, November 1, 2015, from

(4) Montagnac, J. A., Davis, C. R. and Tanumihardjo, S. A. Nutritional Value of Cassava for Use as a Staple Food and Recent Advances for Improvement. Comprehensive Reviews in Food Science and Food Safety, 8, 181–194 (2009). doi: 10.1111/j.1541-4337.2009.00077.x. #openaccess paper available at 

(5) Martina Newell McGloughlin. Modifying agricultural crops for improved nutrition. New Biotechnology 27(5), 494-504 (November 2010). Available at 

(6) Teresa B. Fitzpatrick, et al. Vitamin Deficiencies in Humans: Can Plant Science Help? The Plant Cell, 24, 395–414 (February 2012). #openaccess available at 

(7) Hervé Vanderschuren, et al. Strategies for vitamin B6 biofortification of plants. Front Plant Sci. 4, 143 (May 2013). Available at 

(8) Arabidopsis thaliana. (2015, October 27). In Wikipedia, The Free Encyclopedia. Retrieved 20:42, November 1, 2015, from

(9) A.Paula Cardoso, et al. Processing of cassava roots to remove cyanogens. Journal of Food Composition and Analysis, 18(5), 451-460 (August 2005). Available at and at

A healthy ocean supports a healthy ocean economy – and a healthy human population.  In the authors’ own words, “The Oceans Act and subsequent strategy thus incorporated some of the best available practices, supported by science (both natural science and social science)”.  So what went wrong?

Originally shared by Samantha Andrews

Oh Canada – what about your ocean?

This is a big post.  It’s about big things.  Important things too.  It deals with Canada – a big country.  Actually by area, it is the second largest country in the world.  It also has a lot of ocean under its jurisdiction.  Take a look at the website of Fisheries and Oceans Canada, a Federal government body, and you will see statements like this:

“The Government of Canada is working to ensure the future health of Canada’s oceans and ocean resources by increasing understanding and protection of our oceans; supporting sustainable economic opportunities; and demonstrating international leadership in oceans management”

Sounds good, doesn’t it?  The Canadian Federal Government (which has just changed as of yesterday – see the bottom of the post) have a number of Acts in place to govern the bit of the ocean they have claimed as theirs.  Great stuff!  Except maybe not – as demonstrated in a recently published paper, authored by 19 Canadian scientists including lead author Megan Bailey (Dalhousie University), “over the past decade decision-making at the federal level appears to have undermined the government’s own mandates for the sustainable management of Canada’s oceans”.

The scientists focus on three key Federal Acts – the Oceans Act, brought into force in 1997, the Fisheries Act which started in 1868 but has undergone a number of revisions since, and the Species at Risk Act (SARA) which was introduced in 2002.

The Oceans Act – A global leader in ocean management

When Canada’s Oceans Act came in, it was heralded as a model.  It sought to bring in an integrated management framework in which ecosystem based management and marine protected areas were high on the list of priorities.  A healthy ocean supports a healthy ocean economy – and a healthy human population.  In the authors’ own words, “The Oceans Act and subsequent strategy thus incorporated some of the best available practices, supported by science (both natural science and social science)”.  So what went wrong?

In a word… implementation. 

The paper highlights several points.  For example, in 2005 the Auditor General of Canada issued a report stating that “Fisheries and Oceans Canada has fallen far short of meeting commitments and targets for implementing key aspects of the Oceans Act”.  In 2012, a follow-up evaluation of the Integrated Ocean Management Program indicated the situation had not improved, with science, engagement of stakeholders, designation of marine protected areas, protection of marine ecosystems, and integrated ocean management planning needing much more work from the Federal government.  “Canada’s marine biodiversity remains at risk”, the Auditor General’s report read… “By extension, the prosperity of many coastal communities in Canada with marine-based economies also remain threatened” 

The Fisheries Act

As the name suggests, the Fisheries Act is the primary piece of legislation through which fisheries in Canadian waters are managed by the Federal government.  I am sure that many of you are aware of the collapse of the cod fisheries in the northwest Atlantic, but the paper highlights something much more recent: habitat protection – or more accurately the lack of it.  In particular the paper picks up on one change brought into force in 2013.  The Act used to read “No person shall carry on any work or undertaking that results in the harmful alteration, disruption or destruction of fish habitat”.  Now the Act reads “No person shall carry on any work, undertaking or activity that results in serious harm to fish that are part of a commercial, recreational or Aboriginal fishery, or to fish that support such a fishery”.  Spot the difference?  Why is this a problem?  The paper highlights a number of reasons – here is a quick overview:

– Only fish with a current value to a fishery are offered protection.  Other fish are excluded (even though they may have a fishery value in the future).

– Protection for fish in remote locations where there are no fisheries is now non-existent.

– Harm has to be proven not disproven.  This is a move away from the precautionary approach.  It is more difficult to prove harm than prove you aren’t doing harm, especially as government research focuses on a small number of fished species in the Atlantic and Pacific.  Arctic regions are largely ignored.

– Species must be shown to have current value to a fishery.  For species taken directly by fishers, this is possible but what about ‘fish that supports such a fishery’?  Now we are talking about having knowledge of the ecosystem, and food-web ecology.  We have some, but not much…and remember research only focuses on a few species – and we are losing historical information.

Species At Risk Act (SARA)

If you are an individual of a species at risk in Canada and you are lucky enough to be listed under SARA you will get two protections, straight off the bat.  You and the rest of your species cannot be killed or collected.  Your “residence” also receives protection – damage or destruction of it is forbidden.  The department looking after you (in the case of marine species that’s Fisheries and Oceans Canada) is charged with developing legal measures to ensure your protection, and with developing an action plan and a recovery strategy.  That is if you get listed.  If you are a commercially important marine species, then you might find that the Federal government is not so willing to have you under SARA, despite scientific evidence being collected and your application recommended by an independent body (the Committee on the Status of Endangered Wildlife in Canada – COSEWIC).  Some 39 marine species have been put forward for listing by COSEWIC.  Just 5 have been accepted.  According to the Federal government, the Fisheries Act takes care of marine species well enough so they don’t need to be under SARA.  Unfortunately, as the paper highlights, this isn’t the case.  Reassessments of marine species by COSEWIC show that many are not recovering as they should.

This is not mentioned in the paper but I think it is somewhat… amusing (insert sarcastic face here).  The Grey Whale (Eschrichtius robustus) population in the Atlantic has SARA protection.  Lucky whale!  Except that humans sort of (by which I mean actually) killed the Atlantic population off a few hundred years ago.  Don’t worry though – a recovery strategy was looked at by Fisheries and Oceans, but alas they concluded that recovery was not feasible.

No science here please, we’re the Canadian Government

The trouble for Canada’s ocean waters doesn’t just end there.  The scientists also note that what is popularly known as the muzzling of government scientists – where scientists cannot speak freely to the public – is just part of the problem.  Funding cuts, closing of libraries, and destruction of archived material, they argue, are limiting the capacity of government scientists to conduct science.  How bad is the situation?  Here are some stats on what the (until very recently) previous government has been up to:

– 7 out of 11 Fisheries and Oceans Canada libraries have been closed in the past 10 years

– A third of the library collections (including unique documents) have been ‘culled’

– 35% of funding for biodiversity programs has been cut

– 42% of funding for pollution management/mitigation programs (terrestrial and marine) has been cut

– C$100 million is due to be cut from Fisheries and Oceans Canada over the next 3 years

– Press releases issued by Federal science departments have declined by 58% 

These concerns have been echoed by others.  Some now ex-government scientists have been quite vocal about the muzzling of scientists in particular – like Steve Campana, who was until recently a biologist with Fisheries and Oceans Canada.  Others have taken more general views of data loss in Canada.  In September, writer Anne Kingston wrote about “Vanishing Canada: Why we’re all losers in Ottawa’s war on data”.

Open Access Paper

The paper was published in the open access journal Marine Policy.  The authors have paid for the paper to be open access, so why not have a read of it yourself?

* UPDATE:  I wrote this post last week but didn’t get a chance to put it online.  On Monday Canada held its election, and the Conservative government, which implemented all these changes, went out and the Liberals came in.  The Liberals have already made some pledges to do better for the ocean, but of course we will have to wait and see what happens.  Public support for the oceans will be crucial if the new government is to act.

The Image:  A gray whale and her calf migrate north along the California coast on their way to summer feeding grounds in the Arctic.  Unlike their SARA-protected relatives that used to live in the Atlantic, gray whales can still be found in the Pacific.  Credit: NOAA/Flickr (CC BY-NC-ND 2.0)


#science #sciencesunday #marinescience #marineconservation #Canada  

So in my mind the axis is caused both by imperfect experiments and analysis and by the human need to find patterns in everything.

Originally shared by Jonah Miller

The CMB Axis of Evil and the Nature of Randomness

This Halloween, Nature News & Comment released an article titled Zombie Physics: 6 Baffling Results that Just Won’t Die.  It’s a fun article describing several mysteries in physics whose solution sits in a sort of limbo.

For fun, I figured I’d explain some of these mysteries, and give my opinion about possible solutions.  I’ll tag them under #ZombiePhysics.  First, I’m going to discuss the CMB Axis of Evil, a strange pattern in the leftover radiation from the Big Bang.

(Those of you who wish to read my post in blog form can do so here.)

A Much-Too-Short Summary of Cosmic Inflation and the CMB

About 13.8 billion years ago, the universe was extremely hot, so hot that matter couldn’t form at all… it was just a chaotic soup of charged particles. Hot things (and accelerating charges) glow. And this hot soup was glowing incredibly brightly. As time passed, the universe expanded and cooled, but this glow remained, bathing all of time and space in light.

(The reason for why the universe was so hot in the first place depends on whether cosmic inflation is true. Either it’s because the Big Bang just happened or it’s because, after cosmic inflation, a particle called the inflaton dumped all of its energy into creating hot matter.)

Even today, the glow remains, filling the universe. As the universe expanded, the glow dimmed and its light changed colors (due to gravitational redshift), until it became microwaves instead of visible or ultraviolet light. This ubiquitous glow is called the Cosmic Microwave Background, or CMB [1] for short, and if you turn an old analogue TV to an unused channel, some of the static you hear is CMB radiation picked up by your TV antenna [2].
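To make the cooling concrete, here is a back-of-the-envelope sketch (not from the post): the CMB temperature scales as T = T₀(1 + z), and Wien’s displacement law gives the peak wavelength of blackbody glow. Using the standard approximate recombination redshift of about 1100, the glow shifts from near-infrared light to millimetre microwaves.

```python
# Sketch: how cosmic expansion turned the CMB glow into microwaves.
# Temperature scales as T = T0 * (1 + z); Wien's displacement law
# gives the peak blackbody wavelength: lambda_peak = b / T.

WIEN_B = 2.898e-3        # Wien's displacement constant, m*K
T_TODAY = 2.725          # CMB temperature today, K
Z_RECOMBINATION = 1100   # approximate redshift when the CMB was released

def peak_wavelength(temperature_k: float) -> float:
    """Peak blackbody wavelength in metres (Wien's displacement law)."""
    return WIEN_B / temperature_k

t_then = T_TODAY * (1 + Z_RECOMBINATION)   # ~3000 K at recombination
print(f"T at recombination: {t_then:.0f} K")
print(f"Peak wavelength then: {peak_wavelength(t_then) * 1e9:.0f} nm (near infrared)")
print(f"Peak wavelength now:  {peak_wavelength(T_TODAY) * 1e3:.2f} mm (microwave)")
```

A thousand-fold expansion stretches the peak wavelength a thousand-fold, which is exactly why a glow that was once visible/near-infrared now shows up as microwave static.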

Since its discovery, the CMB has been one of our most powerful probes of cosmology. It lets us accurately measure how fast the universe is expanding [3], the relative amounts of normal stuff vs dark energy and dark matter [4], how the density of matter fluctuated in the early universe [5], how the Earth is moving relative to the expansion of the universe [6], and much more.

Some parts of the early universe were more dense and some were less, and this translates to slight, random variation in the color of light in the CMB. And in turn, we can translate this into a temperature. The temperature of the CMB is incredibly consistent across the sky. It’s an almost perfect 2.725 Kelvin. However, there are tiny fluctuations relative to this mean, and these reflect the dynamics of the early universe. Figure 2 shows a map of these fluctuations, and I describe how this map is obtained in my post on BICEP2.

The CMB Axis of Evil

It’s very hard to see in figure 2, but with a little massaging, we can see that many of the fluctuations in the CMB align along a single axis, called the axis of evil, as shown in figure 1. (Formally, the quadrupole and octupole moments of the fluctuations align.) At first glance, this is quite strange, because we believe that the fluctuations in the density of the early universe should be randomly distributed in a particular way… and this is exactly the way they are distributed on smaller scales. The mottled look of figure 2 is exactly due to this particular random behaviour of the fluctuations in the CMB.

So what’s going on? There are a couple of possibilities. I’ll go over them and add my opinion (and the scientific consensus or lack thereof).

Errors in Foreground and Modelling

Perhaps the most boring explanation is that we made a mistake when creating the CMB maps like figure 1 and figure 2. As the story of BICEP2 [7] shows, making those maps is very hard. To create them, we have to account for all the other sources of microwave radiation in the universe and carefully remove them from our measurements.

Over time, we’ve gotten incredibly good at this…so good that we can extract all sorts of information about the early universe from the CMB. But that doesn’t mean we’re always right. There could be extra dust in the solar system [8]. Or a confluence of the gravitational pull of distant galaxies on the light of the CMB (called the integrated Sachs-Wolfe effect) could magnify a normal random fluctuation so that it appears significant [9].

(I am really oversimplifying the integrated Sachs-Wolfe effect here. But that’s a story for another time.)

I think errors in foreground modelling could easily account for the axis of evil.

The Universe is a Doughnut or a Sphere

Imagine an ant living on the surface of a doughnut. The ant is so small that the doughnut appears flat to it. As the ant travels forward, it will eventually return to where it started, no matter what direction it travelled. From our perspective, of course, this is because a doughnut wraps around. But to the ant, this would be quite mysterious! Figure 3 shows the doughnut from both our perspective and the ant’s perspective. This is very similar to how if you travel East on the Earth, you eventually return to your starting place.

What if our universe was like the doughnut, but in three dimensions? So if you start going in a direction, say towards Andromeda, and keep going for as long as possible, billions of light years, you would eventually get back to where you started (ignoring of course that the universe is expanding and thus the distance you would have to travel would increase faster than you could travel it).
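A toy sketch (my addition, not from the post) makes the wrap-around concrete: in a one-dimensional periodic universe, “travelling” is just addition modulo the size of the universe.

```python
def travel(position: float, distance: float, box: float) -> float:
    """Position after travelling `distance` in a 1D universe that wraps every `box` units."""
    return (position + distance) % box

# In a universe that wraps every 100 units, travelling 250 units
# forward from position 10 takes you past "home" twice and leaves
# you at position 60.
print(travel(10.0, 250.0, 100.0))  # 60.0
```

A doughnut universe is the same idea in three dimensions: each coordinate wraps around independently, so every straight-line journey eventually returns to its start.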

Perhaps we see the same things on both sides of the axis of evil because they are literally the same things, and the universe has wrapped around on itself? In the original paper discussing the axis of evil [10], the authors discuss this very possibility. It’s a nice idea, and it can actually be tested by trying to match images of stars and galaxies (and fluctuations in the cosmic microwave background) on opposite sides of the sky to see if they look the same. The results, however, are not favourable. So no one takes this idea very seriously… even though it’s very clever.

Cosmic Variance

This one takes a bit of explanation. So bear with me. First, let’s talk about something called a posteriori statistics.

A Posteriori Statistics

Imagine a teacher breaks her students into two groups. She tells one group to flip a coin ten times and record the result as a sequence of heads or tails. The group might record, for example,

TTTTHHHHHT

which would correspond to a string of four tails, then a string of four heads, then one head, and one tail. She tells the other group of students to make up ten coin flips, but to try to do so in a way that looks random. The two sequences the students hand in are:

HTHHTHTTHT
TTTTHHHHHT

And, masterfully, the teacher immediately picks out the truly random sequence.  Which one is it? How does she do it? The second sequence, TTTTHHHHHT, which looks very structured, is the random one.

The human mind is very good at picking out patterns, and attributes a cause to every pattern it sees. But random numbers, very naturally, randomly in fact, appear to make patterns, even though the pattern doesn’t mean anything. It’s just random noise. The teacher takes advantage of this. She knows her students will avoid creating a sequence that looks too structured, because they don’t think random numbers look like that. But random numbers can easily look like that.

Of course, the probability that precisely the second sequence would emerge is less than one percent. But the emergence of some sequence that looks vaguely like the second sequence is vastly more likely.  You can think of this like finding a cool looking cloud, or Jesus in your morning toast. You see the cool looking cloud and you think “Wow! A cloud that looks like an airplane! What are the odds?” But you should be thinking “Wow! A cloud that looks like an airplane! The odds of me finding a cloud that looks like something interesting are quite high because there are a lot of clouds and a lot of things I think are interesting.”
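That intuition can be checked numerically. A quick simulation (my addition, not from the post) shows that nearly half of all truly random ten-flip sequences contain a “structured-looking” run of four or more identical results.

```python
# Sketch: how often does a "structured-looking" run appear in truly
# random coin flips? Simulate many 10-flip sequences and count how
# many contain a run of 4 or more identical results.
import random

def longest_run(seq: str) -> int:
    """Length of the longest run of identical characters in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

rng = random.Random(42)  # fixed seed for reproducibility
trials = 100_000
hits = sum(
    longest_run("".join(rng.choice("HT") for _ in range(10))) >= 4
    for _ in range(trials)
)
print(f"Fraction with a run of >= 4: {hits / trials:.2f}")  # roughly 0.46
```

So a run of four tails followed by four heads isn’t evidence of structure at all; it’s exactly the kind of thing fair coins do, and exactly the kind of thing the made-up sequences avoid.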

This sort of thinking is called a posteriori statistics. And in general, it causes mistaken analysis.

Cosmic Variance and the Axis of Evil

So what does this have to do with the CMB? Well, people who study the CMB are well aware of the danger of a posteriori statistics, so they try to avoid thinking in this way. One way to avoid this sort of thinking is to make many many measurements. If you have a huge number of sequences of coin flips, on average, the randomness (or lack thereof) will become manifest.

And this is indeed what we do for most of the cosmic microwave background. The fluctuations on small scales, which give figures 1 and 2 their mottled texture, are numerous and we can do many statistics on them by looking at different areas of the sky.

But the axis of evil is different. It covers almost the whole sky. And we only have one sky to make measurements of! So it’s not possible to do good statistics. The fact that we have only one universe to measure, which we believe emerged from random processes, and that we can’t do statistics on a whole ensemble of universes is called cosmic variance.

And cosmic variance interferes with our ability to avoid a posteriori statistics. It lets us fool ourselves into believing that the way our universe turned out is special, when there may in fact be a multitude of equally probable ways our universe could have been. And it is entirely possible that the axis of evil is one such “fluke.”
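Cosmic variance can even be quantified. This formula isn’t spelled out in the post, but it’s the standard result: at multipole ℓ the sky only offers 2ℓ + 1 independent modes, so the fractional uncertainty on the measured power is √(2/(2ℓ + 1)), which is enormous at the large angular scales where the axis of evil lives.

```python
# Sketch: the standard cosmic-variance formula. At multipole l we only
# get 2l + 1 independent modes on our one sky, so the fractional
# uncertainty on the angular power C_l is sqrt(2 / (2l + 1)).
import math

def cosmic_variance_fraction(ell: int) -> float:
    """Fractional cosmic-variance uncertainty at multipole ell."""
    return math.sqrt(2.0 / (2 * ell + 1))

for ell in (2, 3, 10, 100, 1000):
    print(f"l = {ell:4d}: +/- {100 * cosmic_variance_fraction(ell):5.1f} %")
```

For the quadrupole (ℓ = 2) and octupole (ℓ = 3) – precisely the moments that align along the axis of evil – the irreducible uncertainty is about 63% and 53%, while at small scales (ℓ ~ 1000) it shrinks to a few percent. That’s why the small-scale mottling supports good statistics and the largest scales don’t.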

It is possible, in principle, to reduce the effects of cosmic variance. If we could move to another position in the universe, we would be able to see a different portion of the CMB (because the light that could have reached us since the CMB was created would come from a different place in the universe). In 1997, Kamionkowski and Loeb [11] suggested using the emissions of distant dust to extrapolate what the CMB looks like to that dust. In principle, it would be possible, but very very hard, to use this trick to test whether or not the axis of evil comes from cosmic variance.

As you may have guessed from the amount of time I devoted to the explanation, I find cosmic variance to be a very compelling cause of the axis of evil.

The Most Likely Story, In My Opinion

So… what do I think is the cause of the axis of evil? The following is my opinion and not rigorous science. But it went something like this. Due to random fluctuations in the way the universe could have been, something that looks like the axis of evil formed in the CMB, but much less significantly. This would be the cosmic variance explanation. To this day, the “axis of evil” remains statistically insignificant. But because our models of what cosmic microwave sources and foregrounds look like in the universe and in our solar system are flawed, and because we don’t take the integrated Sachs-Wolfe effect into account, the axis of evil appears much bigger to us than it actually is.

So in my mind the axis is caused both by imperfect experiments and analysis and by the human need to find patterns in everything.


I owe a huge thanks to my friend and colleague, Ryan Westernacher-Schneider, who told me this story last spring and compiled a summary and list of references. Ryan basically wrote this blog post. I just paraphrased and summarized his words.

Further Reading

I’m not the first science writer to cover this material. Both Ethan Siegel and Brian Koberlein have great articles on it. Check them out:

1. This is Brian Koberlein’s article:

2. This is Ethan Siegel’s:

For those of you interested in reading about the axis of evil in more depth, here are a few resources.

1. This is the first paper to discuss the axis of evil. It also discusses the possibility that the universe is a doughnut:

2. This paper coined the term “axis of evil.”

3. This paper discusses the possibility of solar-system dust producing the axis of evil:

4. This paper discusses the integrated Sachs-Wolfe effect and how it enhances the axis of evil:

5. This paper proposes a way of reducing cosmic variance:

6. These are the collected published results of the Planck collaboration, which analyses all aspects of the CMB in great depth:


1. You can read the Nature article on zombie physics here:

2. I also wrote a post about the Nature article here:

Related Reading

If you enjoyed this post, you might enjoy my other posts on cosmology. I wrote a two-part series on the BICEP2 experiment:

1. In the first article, I describe what BICEP2 was claiming to observe:

2. In the second article, I describe how measurements of the CMB are made and what went wrong with BICEP2:

I have a three-part series on the early universe:

1. In the first article, I describe the evidence for the Big Bang:

2. In the second article, I describe problems with the Big Bang theory:

3. And in the third article, I describe cosmic inflation and how it fixes the problems we had with the Big Bang:

I have a fun article that describes the cosmic microwave background as the surface of an inside-out star:


#Science #ScienceSunday #ZombiePhysics #physics #cosmology #CMB #axisofevil

For universities and colleges, there is no clear right or wrong choice on fossil fuel divestment, despite what activists might insist. Each institution must weigh and consider its own unique constituencies and the strategies by which it can make the biggest difference on climate change.

Originally shared by Gaythia Weis

MIT and Fossil Fuel Corporations: Engagement or Pandering?

Calls to divest from fossil fuel stocks are simplistic, because simply closing one’s eyes, ears and nose to these corporate entities does nothing to reduce their power, and nothing to leverage university knowledge and other assets to further change.  On the other hand, alliances with fossil fuel corporations that are not made with honest openness and true exchange on both sides are, in my opinion, simply greedy pandering to big money.

I think that the key questions to ask come from this Guardian article:

“MIT seeks to convene key players with the goal of helping drive significant progress for the world. There is a great deal to do and we are eager to get started.”

“Its plan calls for eight new ‘low-carbon energy centers’ that will work with companies to develop technologies focusing on solar energy, nuclear fusion, energy storage and other initiatives. Each center will seek about $8m in annual funding over five years, totaling more than $300m.”

Who are the “key players” that MIT will partner with, and by so doing, use its knowledge base to promote?  By seeking substantial funding for these centers, to whom will MIT be beholden?

In my opinion, the answers to these questions will serve to shape, in substantial measure, how US energy policy proceeds.  Which, I sadly believe, will mean that the trajectory may be more in the direction of greatest available funding than best available science.

It seems to me that this deserves deeper discussion as to what constitutes engagement.  This needs to be a two way partnership.  And when talking about corporations, it needs to be recognized that these institutions are financial in nature, and it is largely money that does the talking.

In the first place, universities are all about the expansion and transfer of knowledge.  Fossil fuel companies, on the other hand, are now known not only to have hidden knowledge they had collected regarding climate change, but also to have deliberately acted to mislead the public into false conclusions – conclusions that aided these corporations in continuing business as usual, to the serious detriment of the planet.  Step one for a knowledge institution would be to join in demands for full release of these climate studies, and for prosecution of the executives who may have caused their corporations to make false statements to the public.

Furthermore, universities with large endowments hold funds that position them economically much like hedge fund operators.  Funds in a similar situation, such as the California Public Employees’ Retirement System (CalPERS), have acted, at least at times, with the knowledge that how they choose to invest, or not to invest, has great influence on societal outcomes.

Here, as I see it, MIT seems unwilling to risk its own wealth as leverage to further change, but rather is acting as essentially a junior partner (money-wise) to even greater wealth, in an effort to collect even more wealth for itself.

Of course we want to promote research into breakthrough energy technologies.  But these need to be truly innovative, and not tied to propping up the very corporations that have brought us to this potentially disastrous climate brink.   New technology is needed, but to be used effectively, it also needs to be placed in new hands.

The problem for academic research institutions is lack of adequate funding.  Universities need more funding through established, peer-reviewed channels such as the National Science Foundation, which are not fraught with corporate conflicts of interest.

Corporations, by keeping much of their wealth offshore, deplete our Federal coffers.  At the same time they have long relied on US government research for much of the basic research underlying the scientific and technological foundations of their industry.  The basic research into hydraulic fracturing, which was done by the United States Geological Survey, is a case in point.

If MIT wants to be an innovative Climate Change leader, they need to ensure that they are acting in partnership with others who are themselves acting in good faith.

Ostracod Fireworks

When an ostracod is swallowed, it emits a burst of light, making the cardinal fish spit it out.

“They’ve been called fish fireworks, and their glowing displays are like nighttime light shows on the water. Ostracods are a very old species of crustacean with a trait called bioluminescence. That’s a fancy way of saying they light up, like fireflies. But unlike fireflies, ostracods have extracellular bioluminescence. They shoot light out of their bodies and into the water. The behavior is part mating ritual, part defense mechanism.

There are ostracods in every world ocean that are luminescent, but only in the Caribbean do you find these ones that have these complex patterns, and it’s probably related to the closing of the Panama Isthmus about 3 million years ago. It could startle their predators visually, or it could actually bring in the predator of their attacker, which is called the burglar alarm effect.

In this gif the tiny ostracod is almost eaten by the much larger fish, but the cardinal fish spits the ostracod out once the ostracod begins to emit light. Exactly why this causes the cardinal fish to spit the ostracods isn’t known, but there are theories.”

✪ Source:

✪ Watch the BBC short clip here:

✪ And some info about Crustaceans:

✪✪✪ HT: Magnus Fahlén  Thanks! ✪✪✪