Universe Got Its Bounce Back

Humans have always entertained two basic theories about the origin of the universe. “In one of them, the universe emerges in a single instant of creation (as in the Jewish-Christian and the Brazilian Carajás cosmogonies),” the cosmologists Mario Novello and Santiago Perez-Bergliaffa noted in 2008. In the other, “the universe is eternal, consisting of an infinite series of cycles (as in the cosmogonies of the Babylonians and Egyptians).” The division in modern cosmology “somehow parallels that of the cosmogonic myths,” Novello and Perez-Bergliaffa wrote.

In recent decades, it hasn’t seemed like much of a contest. The Big Bang theory, standard stuff of textbooks and television shows, enjoys strong support among today’s cosmologists. The rival eternal-universe picture had the edge a century ago, but it lost ground as astronomers observed that the cosmos is expanding and that it was small and simple about 14 billion years ago. In the most popular modern version of the theory, the Big Bang began with an episode called “cosmic inflation” — a burst of exponential expansion during which an infinitesimal speck of space-time ballooned into a smooth, flat, macroscopic cosmos, which expanded more gently thereafter.

With a single initial ingredient (the “inflaton field”), inflationary models reproduce many broad-brush features of the cosmos today. But as an origin story, inflation is lacking; it raises questions about what preceded it and where that initial, inflaton-laden speck came from. Undeterred, many theorists think the inflaton field must fit naturally into a more complete, though still unknown, theory of time’s origin.

But in the past few years, a growing number of cosmologists have cautiously revisited the alternative. They say the Big Bang might instead have been a Big Bounce. Some cosmologists favor a picture in which the universe expands and contracts cyclically like a lung, bouncing each time it shrinks to a certain size, while others propose that the cosmos only bounced once — that it had been contracting, before the bounce, since the infinite past, and that it will expand forever after. In either model, time continues into the past and future without end.

With modern science, there’s hope of settling this ancient debate. In the years ahead, telescopes could find definitive evidence for cosmic inflation. During the primordial growth spurt — if it happened — quantum ripples in the fabric of space-time would have become stretched and later imprinted as subtle swirls in the polarization of ancient light called the cosmic microwave background. Current and future telescope experiments are hunting for these swirls. If they aren’t seen in the next couple of decades, this won’t entirely disprove inflation (the telltale swirls could simply be too faint to make out), but it will strengthen the case for bounce cosmology, which doesn’t predict the swirl pattern.

Already, several groups are making progress at once. Most significantly, in the last year, physicists have come up with two new ways that bounces could conceivably occur. One of the models, described in a paper that will appear in the Journal of Cosmology and Astroparticle Physics, comes from Anna Ijjas of Columbia University, extending earlier work with her former adviser, the Princeton professor and high-profile bounce cosmologist Paul Steinhardt. More surprisingly, the other new bounce solution, accepted for publication in Physical Review D, was proposed by Peter Graham, David Kaplan and Surjeet Rajendran, a well-known trio of collaborators who mainly focus on particle physics questions and have no previous connection to the bounce cosmology community. It’s a noteworthy development in a field that’s highly polarized on the bang vs. bounce question.

The question gained renewed significance in 2001, when Steinhardt and three other cosmologists argued that a period of slow contraction in the history of the universe could explain its exceptional smoothness and flatness, as witnessed today, even after a bounce — with no need for a period of inflation.

The universe’s impeccable plainness, the fact that no region of sky contains significantly more matter than any other and that space is breathtakingly flat as far as telescopes can see, is a mystery. To match its present uniformity, experts infer that the cosmos, when it was one centimeter across, must have had the same density everywhere to within one part in 100,000. But as it grew from an even smaller size, matter and energy ought to have immediately clumped together and contorted space-time. Why don’t our telescopes see a universe wrecked by gravity?

“Inflation was motivated by the idea that that was crazy to have to assume the universe came out so smooth and not curved,” said the cosmologist Neil Turok, director of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and co-author of the 2001 paper on cosmic contraction with Steinhardt, Justin Khoury and Burt Ovrut. In the inflation scenario, the centimeter-size region results from the exponential expansion of a much smaller region — an initial speck measuring no more than a trillionth of a trillionth of a centimeter across. As long as that speck was infused with an inflaton field that was smooth and flat, meaning its energy concentration didn’t fluctuate across time or space, the speck would have inflated into a huge, smooth universe like ours. Raman Sundrum, a theoretical physicist at the University of Maryland, said the thing he appreciates about inflation is that “it has a kind of fault tolerance built in.” If, during this explosive growth phase, there was a buildup of energy that bent space-time in a certain place, the concentration would have quickly inflated away. “You make small changes against what you see in the data and you see the return to the behavior that the data suggests,” Sundrum said.

However, where exactly that infinitesimal speck came from, and why it came out so smooth and flat itself to begin with, no one knows. Theorists have found many possible ways to embed the inflaton field into string theory, a candidate for the underlying quantum theory of gravity. So far, there’s no evidence for or against these ideas.

Cosmic inflation also has a controversial consequence. The theory — which was pioneered in the 1980s by Alan Guth, Andrei Linde, Aleksei Starobinsky and (of all people) Steinhardt — almost automatically leads to the hypothesis that our universe is a random bubble in an infinite, frothing multiverse sea. Once inflation starts, calculations suggest that it keeps going forever, only stopping in local pockets that then blossom into bubble universes like ours. The possibility of an eternally inflating multiverse suggests that our particular bubble might never be fully understandable on its own terms, since everything that can possibly happen in a multiverse happens infinitely many times. The subject evokes gut-level disagreement among experts. Many have reconciled themselves to the idea that our universe could be just one of many; Steinhardt calls the multiverse “hogwash.”

This sentiment partly motivated his and other researchers’ about-face on bounces. “The bouncing models don’t have a period of inflation,” Turok said. Instead, they add a period of contraction before a Big Bounce to explain our uniform universe. “Just as the gas in the room you’re sitting in is completely uniform because the air molecules are banging around and equilibrating,” he said, “if the universe was quite big and contracting slowly, that gives plenty of time for the universe to smooth itself out.”

Although the first contracting-universe models were convoluted and flawed, many researchers became convinced of the basic idea that slow contraction can explain many features of our expanding universe. “Then the bottleneck became literally the bottleneck — the bounce itself,” Steinhardt said. As Ijjas put it, “The bounce has been the showstopper for these scenarios. People would agree that it’s very interesting if you can do a contraction phase, but not if you can’t get to an expansion phase.”

Bouncing isn’t easy. In the 1960s, the British physicists Roger Penrose and Stephen Hawking proved a set of so-called “singularity theorems” showing that, under very general conditions, contracting matter and energy will unavoidably crunch into an immeasurably dense point called a singularity. These theorems make it hard to imagine how a contracting universe in which space-time, matter and energy are all rushing inward could possibly avoid collapsing all the way down to a singularity — a point where Albert Einstein’s classical theory of gravity and space-time breaks down and the unknown quantum gravity theory rules. Why shouldn’t a contracting universe share the same fate as a massive star, which dies by shrinking to the singular center of a black hole?

Both of the newly proposed bounce models exploit loopholes in the singularity theorems — ones that, for many years, seemed like dead ends. Bounce cosmologists have long recognized that bounces might be possible if the universe contained a substance with negative energy (or other sources of negative pressure), which would counteract gravity and essentially push everything apart. They’ve been trying to exploit this loophole since the early 2000s, but they always found that adding negative-energy ingredients made their models of the universe unstable, because positive- and negative-energy quantum fluctuations could spontaneously arise together, unchecked, out of the zero-energy vacuum of space. In 2016, the Russian cosmologist Valery Rubakov and colleagues even proved a “no-go” theorem that seemed to rule out a huge class of bounce mechanisms on the grounds that they caused these so-called “ghost” instabilities.

Then Ijjas found a bounce mechanism that evades the no-go theorem. The key ingredient in her model is a simple entity called a “scalar field,” which, according to the idea, would have kicked into gear as the universe contracted and energy became highly concentrated. The scalar field would have braided itself into the gravitational field in a way that exerted negative pressure on the universe, reversing the contraction and driving space-time apart — without destabilizing everything. Ijjas’ paper “is essentially the best attempt at getting rid of all possible instabilities and making a really stable model with this special type of matter,” said Jean-Luc Lehners, a theoretical cosmologist at the Max Planck Institute for Gravitational Physics in Germany who has also worked on bounce proposals.

What’s especially interesting about the two new bounce models is that they are “non-singular,” meaning the contracting universe bounces and starts expanding again before ever shrinking to a point. These bounces can therefore be fully described by the classical laws of gravity, requiring no speculations about gravity’s quantum nature.

Graham, Kaplan and Rajendran, of Stanford University, Johns Hopkins University and the University of California, Berkeley, respectively, reported their non-singular bounce idea on the scientific preprint site in September 2017. They found their way to it after wondering whether a previous contraction phase in the history of the universe could be used to explain the value of the cosmological constant — a mystifyingly tiny number that defines the amount of dark energy infused in the space-time fabric, energy that drives the accelerating expansion of the universe.

In working out the hardest part — the bounce — the trio exploited a second, largely forgotten loophole in the singularity theorems. They took inspiration from a characteristically strange model of the universe proposed by the logician Kurt Gödel in 1949, when he and Einstein were walking companions and colleagues at the Institute for Advanced Study in Princeton, New Jersey. Gödel used the laws of general relativity to construct the theory of a rotating universe, whose spinning keeps it from gravitationally collapsing in much the same way that Earth’s orbit prevents it from falling into the sun. Gödel especially liked the fact that his rotating universe permitted “closed timelike curves,” essentially loops in time, which raised all sorts of Gödelian riddles. To his dying day, he eagerly awaited evidence that the universe really is rotating in the manner of his model. Researchers now know it isn’t; otherwise, the cosmos would exhibit alignments and preferred directions. But Graham and company wondered about small, curled-up spatial dimensions that might exist in space, such as the six extra dimensions postulated by string theory. Could a contracting universe spin in those directions?

Imagine there’s just one of these curled-up extra dimensions, a tiny circle found at every point in space. As Graham put it, “At each point in space there’s an extra direction you can go in, a fourth spatial direction, but you can only go a tiny little distance and then you come back to where you started.” If there are at least three extra compact dimensions, then, as the universe contracts, matter and energy can start spinning inside them, and the dimensions themselves will spin with the matter and energy. The vorticity in the extra dimensions can suddenly initiate a bounce. “All that stuff that would have been crunching into a singularity, because it’s spinning in the extra dimensions, it misses — sort of like a gravitational slingshot,” Graham said. “All the stuff should have been coming to a single point, but instead it misses and flies back out again.”

The paper has attracted attention beyond the usual circle of bounce cosmologists. Sean Carroll, a theoretical physicist at the California Institute of Technology, is skeptical but called the idea “very clever.” He said it’s important to develop alternatives to the conventional inflation story, if only to see how much better inflation appears by comparison — especially when next-generation telescopes come online in the early 2020s looking for the telltale swirl pattern in the sky caused by inflation. “Even though I think inflation has a good chance of being right, I wish there were more competitors,” Carroll said. Sundrum, the Maryland physicist, felt similarly. “There are some questions I consider so important that even if you have only a 5 percent chance of succeeding, you should throw everything you have at it and work on them,” he said. “And that’s how I feel about this paper.”

As Graham, Kaplan and Rajendran explore their bounce and its possible experimental signatures, the next step for Ijjas and Steinhardt, working with Frans Pretorius of Princeton, is to develop computer simulations. (Their collaboration is supported by the Simons Foundation, which also funds Quanta Magazine.) Both bounce mechanisms also need to be integrated into more complete, stable cosmological models that would describe the entire evolutionary history of the universe.

Beyond these non-singular bounce solutions, other researchers are speculating about what kind of bounce might occur when a universe contracts all the way to a singularity — a bounce orchestrated by the unknown quantum laws of gravity, which replace the usual understanding of space and time at extremely high energies. In forthcoming work, Turok and collaborators plan to propose a model in which the universe expands symmetrically into the past and future away from a central, singular bounce. Turok contends that the existence of this two-lobed universe is equivalent to the spontaneous creation of electron-positron pairs, which constantly pop in and out of the vacuum. “Richard Feynman pointed out that you can look at the positron as an electron going backwards in time,” he said. “They’re two particles, but they’re really the same; at a certain moment in time they merge and annihilate.” He added, “The idea is a very, very deep one, and most likely the Big Bang will turn out to be similar, where a universe and its anti-universe were drawn out of nothing, if you like, by the presence of matter.”

It remains to be seen whether this universe/anti-universe bounce model can accommodate all observations of the cosmos, but Turok likes how simple it is. Most cosmological models are far too complicated in his view. The universe “looks extremely ordered and symmetrical and simple,” he said. “That’s very exciting for theorists, because it tells us there may be a simple — even if hard-to-discover — theory waiting to be discovered, which might explain the most paradoxical features of the universe.”

America’s declining relevance and China’s gains in the South China Sea

At a top regional security forum on Saturday, US Defence Secretary Jim Mattis said China’s recent militarisation efforts in the disputed South China Sea were intended to intimidate and coerce regional countries.

Mattis told the Shangri-La Dialogue that China’s actions stood in “stark contrast with the openness of [the US] strategy,” and warned of “much larger consequences” if China continued its current approach.

As an “initial response”, China’s navy has been disinvited by the US from the upcoming 2018 Rim of the Pacific Exercise, the world’s largest international naval exercise.

It is important to understand the context of the current tensions, and the strategic stakes for both China and the US.

In recent years, China has sought to bolster its control over the South China Sea, where a number of claimants have overlapping territorial claims with China, including Vietnam, the Philippines and Taiwan.

China’s efforts have continued unabated, despite rising tensions and international protests. Just recently, China landed a long-range heavy bomber for the first time on an island in the disputed Paracels, and deployed anti-ship and anti-air missile systems to its outposts in the Spratly Islands.

China’s air force has also stepped up its drills and patrols in the skies over the South China Sea.

While China is not the only claimant militarising the disputed region, no one else comes remotely close to the ambition, scale and speed of China’s efforts.

China’s strategy

The South China Sea has long been coveted by China (and others) due to its strategic importance for trade and military power, as well as its abundant resources. According to one estimate, US$3.4 trillion in trade passed through the South China Sea in 2016, representing 21% of the global total.

China’s goal in the South China Sea can be summarised with one word: control.

In order to achieve this, China is undertaking a coordinated, long-term effort to assert its dominance in the region, including the building of artificial islands, civil and military infrastructure, and the deployment of military ships and aircraft to the region.

While politicians of other countries such as the US, Philippines and Australia espouse fiery rhetoric to protest China’s actions, Beijing is focusing on actively transforming the physical and power geography of the South China Sea.

In fact, according to the new commander of the US Indo-Pacific Command, Admiral Philip Davidson, China’s efforts have been so successful that it “is now capable of controlling the South China Sea in all scenarios short of war with the US”.

America’s declining relevance

China’s efforts are hard to counter because it has employed an incremental approach to cementing its control in the South China Sea. None of its actions would individually justify a US military response that could escalate to war. In any case, the human and economic cost of such a conflict would be immense.

The inability of the US to respond effectively to China’s moves has eroded its credibility in the region. It has also fed a narrative that the US is not “here to stay” in Asia. If the US is serious about countering China, then Mattis’ rhetoric must be followed by action.

First, the US should clearly articulate its red lines to China and others on the kinds of activities that are unacceptable in the South China Sea. Then it must be willing to enforce such red lines, while being mindful of the risks.

Second, the US needs to renew its efforts to cooperate with allies in the region to build capacity and demonstrate a coordinated commitment to stand in the face of China’s challenge.

Third, the US needs to deploy military capabilities in the Indo-Pacific region, such as advanced missile systems, which would reduce the military advantages gained by China through the militarisation of the South China Sea features.

Long-term consequences

China’s tightening control over the South China Sea is worrying for a number of regional countries. For many, the shipping routes that run through the South China Sea are the lifelines of their economies.

Moreover, the shifting balance of power will enable Beijing to settle its territorial disputes in the region for good. Without a doubt, China is willing to use its new-found power to change the status quo in its favour, even at the expense of its weaker neighbours.

Control of the South China Sea also allows Beijing to better project its military power across South-East Asia, the western Pacific and parts of Oceania. This would make it more costly for the US and its allies to take action against China, for example, in scenarios involving Taiwan.

On a higher level, China’s assertive approach to the South China Sea demonstrates Beijing’s increasing confidence and its willingness to flout international norms that it considers inconvenient or contrary to its interests.

There is little doubt China is becoming the new dominant power in Asia. Its rise has benefited millions in the region and should be welcomed. But we should also be wary of Beijing’s approach to territorial disputes and grievances if it employs military and economic intimidation and coercion.

If we want to live in a “world where big fish neither eat nor intimidate the small”, then there must be consequences for countries, including China, when they flout international norms and seek to settle disagreements with force.

It may be too late to turn the tide in the South China Sea and reverse China’s gains; doing so would carry risks no one is willing to run. But it is not too late to impose penalties on China for further destabilising the region through its actions in the South China Sea.

The challenge is to figure out how to do that, and what we would be willing to risk to achieve it.

A Short Guide to Hard Problems

How fundamentally difficult is a problem? That’s the basic task of computer scientists who hope to sort problems into what are called complexity classes. These are groups that contain all the computational problems that require less than some fixed amount of a computational resource — something like time or memory. Take a toy example featuring a large number such as 123,456,789,001. One might ask: Is this number prime, divisible only by 1 and itself? Computer scientists can solve this using fast algorithms — algorithms that don’t bog down as the number gets arbitrarily large. In our case, 123,456,789,001 is not a prime number. Then we might ask: What are its prime factors? Here no such fast algorithm exists — not unless you use a quantum computer. Therefore computer scientists believe that the two problems are in different complexity classes.
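To make the contrast concrete, here is a rough Python sketch (illustrative only, not code from the article): a Miller-Rabin primality test answers the "is it prime?" question quickly even for very large inputs, while naive trial-division factoring slows dramatically as the number of digits grows.

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test; deterministic for n below roughly 3.3e24
    when run with this fixed set of witness bases."""
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)  # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def trial_factor(n):
    """Naive factoring by trial division: running time grows exponentially
    in the number of digits, unlike the primality test above."""
    f, factors = 2, []
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)
    return factors

print(is_probable_prime(123_456_789_001))     # False: the number is composite
print(trial_factor(123_456_789_001)[0])       # 7, its smallest prime factor
```

The asymmetry in the sketch mirrors the article's point: deciding primality is fast, but recovering the factors has no known fast classical algorithm.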

Many different complexity classes exist, though in most cases researchers haven’t been able to prove one class is categorically distinct from the others. Proving those types of categorical distinctions is among the hardest and most important open problems in the field. That’s why the new result I wrote about last month in Quanta was considered such a big deal: In a paper published at the end of May, two computer scientists proved (with a caveat) that the two complexity classes that represent quantum and classical computers really are different.

The differences between complexity classes can be subtle or stark, and keeping the classes straight is a challenge. For that reason, Quanta has put together this primer on seven of the most fundamental complexity classes. May you never confuse BPP and BQP again.


P

Stands for: Polynomial time

Short version: All the problems that are easy for a classical (meaning nonquantum) computer to solve.

Precise version: Algorithms in P must stop and give the right answer in at most n^c time, where n is the length of the input and c is some constant.

Typical problems:
• Is a number prime?
• What’s the shortest path between two points?
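The shortest-path question really does have a fast algorithm. A minimal sketch (using a hypothetical toy graph, not an example from the article): breadth-first search finds shortest paths in an unweighted graph in time polynomial in the graph's size.

```python
from collections import deque

def shortest_path_length(adjacency, start, goal):
    """Breadth-first search: a textbook polynomial-time algorithm for
    shortest paths in an unweighted graph, i.e. a problem in P."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return seen[node]
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return None  # goal unreachable from start

# Hypothetical five-node graph.
net = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(shortest_path_length(net, 0, 4))  # 3 (e.g. 0 -> 1 -> 3 -> 4)
```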

What researchers want to know: Is P the same thing as NP? If so, it would upend computer science and render most cryptography ineffective overnight. (Almost no one thinks this is the case.)


NP

Stands for: Nondeterministic Polynomial time

Short version: All problems that can be quickly verified by a classical computer once a solution is given.

Precise version: A problem is in NP if, given a “yes” answer, there is a short proof that establishes the answer is correct. If the input is a string, X, and you need to decide if the answer is “yes,” then a short proof would be another string, Y, that can be used to verify in polynomial time that the answer is indeed “yes.” (Y is sometimes referred to as a “short witness” — all problems in NP have “short witnesses” that allow them to be verified quickly.)

Typical problems:
• The clique problem. Imagine a graph with edges and nodes — for example, a graph where nodes are individuals on Facebook and two nodes are connected by an edge if they’re “friends.” A clique is a subset of this graph where all the people are friends with all the others. One might ask of such a graph: Is there a clique of 20 people? 50 people? 100? Finding such a clique is an “NP-complete” problem, meaning that it has the highest complexity of any problem in NP. But if given a potential answer — a subset of 50 nodes that may or may not form a clique — it’s easy to check.
• The traveling salesman problem. Given a list of cities with distances between each pair of cities, is there a way to travel through all the cities in less than a certain number of miles? For example, can a traveling salesman pass through every U.S. state capital in less than 11,000 miles?
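The "easy to check" half of the clique problem can be made concrete. A minimal sketch (with a hypothetical toy graph): given a candidate subset of nodes — the "short witness" — verifying that it forms a k-clique takes only polynomial time, even though finding one is believed to be hard.

```python
from itertools import combinations

def verify_clique(adjacency, witness, k):
    """Polynomial-time verification that `witness` is a k-clique.

    adjacency: dict mapping each node to the set of its neighbours.
    Checks every pair in the witness, so the work is O(k^2) edge lookups.
    """
    if len(set(witness)) != k:
        return False
    return all(v in adjacency[u] for u, v in combinations(witness, 2))

# Hypothetical toy graph: a triangle {a, b, c} plus a pendant node d.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
print(verify_clique(graph, ["a", "b", "c"], 3))  # True: all pairs are friends
print(verify_clique(graph, ["a", "b", "d"], 3))  # False: a and d aren't friends
```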

What researchers want to know: Does P = NP? Computer scientists are nowhere near a solution to this problem.


PH

Stands for: Polynomial Hierarchy

Short version: PH is a generalization of NP — it contains all the problems you get if you start with a problem in NP and add additional layers of complexity.

Precise version: PH contains problems with some number of alternating “quantifiers” that make the problems more complex. Here’s an example of a problem with alternating quantifiers: Given X, does there exist Y such that for every Z there exists W such that R happens? The more quantifiers a problem contains, the more complex it is and the higher up it is in the polynomial hierarchy.

Typical problem:
• Determine if there exists a clique of size 50 but no clique of size 51.
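This typical problem pairs an "exists" question with a "for all" question. A brute-force sketch (exponential time, on a hypothetical tiny graph) makes the two quantifiers explicit:

```python
from itertools import combinations

def has_clique(adjacency, k):
    """EXISTS a k-subset whose members are all pairwise connected?"""
    return any(
        all(v in adjacency[u] for u, v in combinations(subset, 2))
        for subset in combinations(list(adjacency), k)
    )

def exact_max_clique_is(adjacency, k):
    """Two alternating quantifiers: there EXISTS a k-clique, and FOR ALL
    (k+1)-subsets, none is a clique. Brute force here, so exponential time."""
    return has_clique(adjacency, k) and not has_clique(adjacency, k + 1)

# Hypothetical graph: a triangle {a, b, c} and an isolated node d.
toy = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
print(exact_max_clique_is(toy, 3))  # True: a triangle exists, no 4-clique
print(exact_max_clique_is(toy, 2))  # False: a larger (3-node) clique exists
```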

What researchers want to know: Computer scientists have not been able to prove that PH is different from P. This problem is equivalent to the P versus NP problem because if (unexpectedly) P = NP, then all of PH collapses to P (that is, P = PH).


PSPACE

Stands for: Polynomial Space

Short version: PSPACE contains all the problems that can be solved with a reasonable amount of memory.

Precise version: In PSPACE you don’t care about time, you care only about the amount of memory required to run an algorithm. Computer scientists have proven that PSPACE contains PH, which contains NP, which contains P.

Typical problem:
• Every problem in P, NP and PH is in PSPACE.

What researchers want to know: Is PSPACE different from P?


BQP

Stands for: Bounded-error Quantum Polynomial time

Short version: All problems that are easy for a quantum computer to solve.

Precise version: All problems that can be solved in polynomial time by a quantum computer.

Typical problems:
• Identify the prime factors of an integer.

What researchers want to know: Computer scientists have proven that BQP is contained in PSPACE and that BQP contains P. They don’t know whether BQP is in NP, but they believe the two classes are incomparable: There are problems that are in NP and not BQP and vice versa.


EXP

Stands for: Exponential Time

Short version: All the problems that can be solved in an exponential amount of time by a classical computer.

Precise version: EXP contains all the previous classes — P, NP, PH, PSPACE and BQP. Researchers have proven that it’s different from P — they have found problems in EXP that are not in P.

Typical problem:
• Generalizations of games like chess and checkers are in EXP. If a chess board can be any size, it becomes a problem in EXP to determine which player has the advantage in a given board position.

What researchers want to know: Computer scientists would like to be able to prove that PSPACE does not contain EXP. They believe there are problems that are in EXP that are not in PSPACE, because sometimes in EXP you need a lot of memory to solve the problems. Computer scientists know how to separate EXP and P.


BPP

Stands for: Bounded-error Probabilistic Polynomial time

Short version: Problems that can be quickly solved by algorithms that include an element of randomness.

Precise version: BPP is exactly the same as P, but with the difference that the algorithm is allowed to include steps where its decision-making is randomized. Algorithms in BPP are required only to give the right answer with a probability close to 1.

Typical problem:
• You’re handed two different formulas that each produce a polynomial that has many variables. Do the formulas compute the exact same polynomial? This is called the polynomial identity testing problem.
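A randomized algorithm for this problem is short enough to sketch. The idea (in the spirit of the Schwartz-Zippel lemma, with hypothetical example formulas): evaluate both formulas at random points; a nonzero polynomial of total degree d vanishes at a random point drawn from a size-S set with probability at most d/S, so disagreement at any point proves the formulas differ, and repeated agreement makes equality overwhelmingly likely.

```python
import random

def polys_equal(f, g, num_vars, degree_bound, trials=20):
    """Randomized polynomial identity test.

    f, g: callables taking `num_vars` integer arguments.
    Each trial that finds f(point) != g(point) is conclusive; if all trials
    agree, the formulas are equal except with tiny probability.
    """
    span = 100 * degree_bound  # sample set much larger than the degree
    for _ in range(trials):
        point = [random.randrange(span) for _ in range(num_vars)]
        if f(*point) != g(*point):
            return False  # definitely different polynomials
    return True  # equal, with high probability

# Hypothetical formulas: (x + y)^2 versus x^2 + 2xy + y^2 — the same polynomial.
print(polys_equal(lambda x, y: (x + y) ** 2,
                  lambda x, y: x * x + 2 * x * y + y * y,
                  num_vars=2, degree_bound=2))  # True

# (x + y)^2 versus x^2 + y^2 differ whenever x*y != 0.
print(polys_equal(lambda x, y: (x + y) ** 2,
                  lambda x, y: x * x + y * y,
                  num_vars=2, degree_bound=2))  # False (except with vanishing probability)
```

No deterministic polynomial-time algorithm is known for this problem when the formulas are given compactly, which is why it is the standard example for BPP.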

What researchers want to know: Computer scientists would like to know whether BPP = P. If that is true, it would mean that every randomized algorithm can be de-randomized. They believe this is the case — that there is an efficient deterministic algorithm for every problem for which there exists an efficient randomized algorithm — but they have not been able to prove it.

Our Time Has Come - Alyssa Ayres

How India is Making Its Place in the World

Over the last 25 years, India’s explosive economic growth has vaulted it into the ranks of the world’s emerging major powers. Long plagued by endemic poverty, India was hamstrung by a burdensome regulatory regime that limited its ability to compete on a global scale until the 1990s. Since then, however, the Indian government has gradually opened up the economy, and the results have been stunning. India’s middle class has grown by leaps and bounds, and the country’s sheer scale—its huge population and $2 trillion economy—means its actions will have a major global impact. From world trade to climate change to democratization, India now matters.

While it is clearly on the path to becoming a great power, India has not abandoned all of its past policies: its economy remains relatively protectionist, and it still struggles with the legacy of its longstanding foreign policy doctrine of nonalignment. India’s vibrant democracy encompasses a vast array of parties who champion dizzyingly disparate policies. And India is not easily swayed by foreign influence; the country carefully guards its autonomy, in part because of its colonial past. For all of these reasons, India tends to move cautiously and deliberately in the international sphere.

In Our Time Has Come, Senior Fellow for India, Pakistan, and South Asia Alyssa Ayres looks at how the tension between India’s inward-focused past and its ongoing integration into the global economy will shape the country’s trajectory. Today, Indian leaders increasingly want to see their country in the ranks of the world’s great powers—in fact, as a “leading power,” to use the words of Prime Minister Narendra Modi. Ayres considers the role India is likely to play as its prominence grows, taking stock of the implications and opportunities for the United States and other nations as the world’s largest democracy defines its place in the world. As Ayres shows, India breaks the mold of the typical ally, and its vastness, history, and diversity render it incomparable to any other major democratic power. By focusing on how India’s unique perspective shapes its approach to global affairs, Our Time Has Come will help the world make sense of India’s rise.

Despite the various challenges author Alyssa Ayres has highlighted in her book, India is well on the road to acquiring global power and status.

The title of Alyssa Ayres’ latest book on India, ‘Our Time Has Come’, would apply more appropriately to a resurgent China, supremely confident in its arrival as a great power, than to India. One may legitimately argue that India’s time has not come yet, although we may be getting there. The subtitle is perhaps more apt—‘How India Is Making Its Place in the World’.

Alyssa Ayres is a sympathetic chronicler of India’s rise to global prominence over the past quarter of a century. She covers this trajectory in eight chapters grouped under three parts: ‘Looking Back’, ‘Transition’ and ‘Looking Ahead’. The epilogue offers some recommendations for US policy towards India, on how the partnership can be strengthened even as India seeks to carve out a place for itself in a new international order. As a balanced and carefully researched analysis of India’s prospects as seen from Washington, this book will rank high in the years to come.

Alyssa did spend some time with me while gathering material for her book and we had a long conversation about the templates from the past, which continue to influence the way India looks at the world. She has duly reflected this in the first chapter. But her book is mainly about departures from the past and how a new India is defining its place in the world even as its historical experience lends its calculations a degree of caution.

Quite predictably, she treats the end of the Cold War as the end of an era, when India had to cope with a greatly transformed geopolitical landscape even as it grappled with an economic crisis that threatened to push the country into a humiliating financial default. India’s relations with the US, and the West in general, began to improve. The Cold War prism, through which India had been viewed as being on the other side of the fence, fell away. The economic crisis compelled the adoption of far-reaching market-based reforms and economic liberalisation, soon putting India on a high growth trajectory.

This reinforced the turn towards the West which could support India’s economic prospects with infusions of capital and technology. Along with the globalisation of the Indian economy and the opportunities this offered to foreign capital, India began to move from the margins towards the centre of the global economy. Its regional and global profile also began to rise.

Alyssa credits the Modi government with having given a new impetus to the transformation of India’s engagement with the world. India is a country more demanding of its due in the world. It is less hesitant in asserting its interests. This trend, she believes, is likely to grow stronger and both friends and adversaries need to acknowledge the change that is taking place.

In spelling out these changes, Alyssa quotes Foreign Secretary Jaishankar who argues that India today seeks to be a “leading power” rather than a “balancing power”, ready to shape events rather than be shaped by them. This, then, is an India which would be less reactive and defensive and would be ready to play a leading role on the world stage.

To be fair, the author acknowledges that previous political dispensations, led by both the Congress and the BJP, presided over very real and significant changes in the conduct of India’s foreign relations. Yet lingering legacies of the past have held India back from taking on the mantle of leadership on the global stage: the defensiveness inherent in the concept of non-alignment, the deeply ingrained suspicion of foreign capital and the widespread political preference for self-reliance. She does, however, credit Modi with having moved away from these constraining legacies more than any other leader so far.

Alyssa devotes a considerable part of her book to the Indian economic story. After all, India’s place in the world is integrally linked to the country’s economic prospects. Here she finds that the country’s historical experience and its complex polity and society have prevented a whole-hearted embrace of economic reforms, and that this detracts from the expansive role India aspires to on the world stage.

She has rightly pointed out that Prime Minister Modi is the first Indian leader to declare his support for reforms explicitly and unreservedly. There is no “reform through stealth” for him. He has also been open in his welcome of foreign investment and has been persuasive in his sales pitch during visits across the world. And yet he has not been successful in bringing about the long-pending reforms in land acquisition and labour laws or in overhauling India’s public sector.

India’s negotiating position in multilateral trade negotiations continues to be marked by defensiveness, despite Prime Minister Modi’s penchant for deal making. Structural reforms, particularly in mechanisms of governance, have made little headway, and all this means that there is a mismatch between political ambition and capacity.

In the case of the US, there is an imbalance between a very robust security and defence relationship and a still modest economic and commercial relationship. The two countries continue to spar with each other in multilateral trade fora, and these differences are likely to sharpen under the Trump administration.

Alyssa concludes her book on an optimistic and forward-looking note. Despite the various challenges she has drawn attention to, she sees India well on the road to acquiring global power and status. The country will be, by 2040, the third largest economy in the world after the US and China. It will have a formidable military, in particular naval power, and, if it plays its cards well, it could begin to emerge as a leading manufacturing power, leveraging its demographic dividend into even more substantial national power.

The author has good advice for US policy makers who will need to accept that India will play according to its own template rather than accept a Washington template. There will be an insistence on a relationship as equals but it is a relationship which will be as important for India as it will be for the US.

One cannot quarrel with that.

Genetic Engineering to Clash With Evolution

In a crowded auditorium at New York’s Cold Spring Harbor Laboratory in August, Philipp Messer, a population geneticist at Cornell University, took the stage to discuss a powerful and controversial new application for genetic engineering: gene drives.

Gene drives can force a trait through a population, defying the usual rules of inheritance. A specific trait ordinarily has a 50-50 chance of being passed along to the next generation. A gene drive could push that rate to nearly 100 percent. The genetic dominance would then continue in all future generations. You want all the fruit flies in your lab to have light eyes? Engineer a drive for eye color, and soon enough, the fruit flies’ offspring will have light eyes, as will their offspring, and so on for all future generations. Gene drives may work in any species that reproduces sexually, and they have the potential to revolutionize disease control, agriculture, conservation and more. Scientists might be able to stop mosquitoes from spreading malaria, for example, or eradicate an invasive species.
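
The arithmetic behind this super-Mendelian spread is easy to sketch. The following is a back-of-the-envelope model of my own, not taken from any of the studies discussed here: a drive allele at frequency p, which converts the wild-type allele in heterozygotes with probability `conversion`, rises to p + conversion × p(1 − p) in the next generation. Setting conversion to 0 recovers ordinary 50-50 inheritance; setting it to 1 approximates a perfect drive.

```python
def next_freq(p, conversion=1.0):
    """Allele frequency after one generation of random mating.

    conversion = 0 gives ordinary Mendelian inheritance (p unchanged);
    conversion = 1 models a perfect drive that converts the wild-type
    allele in every heterozygote, so p' = p + p(1 - p).
    """
    return p + conversion * p * (1 - p)

p = 0.05  # release drive carriers into 5 percent of the gene pool
for generation in range(10):
    p = next_freq(p, conversion=1.0)
print(round(p, 4))  # a perfect drive is essentially fixed by now
```

Starting from a 5 percent release, a perfect drive sweeps to near-total fixation within about ten generations, which is why, in the fruit-fly example, every later generation ends up with light eyes.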

The technology represents the first time in history that humans have the ability to engineer the genes of a wild population. As such, it raises intense ethical and practical concerns, not only from critics but from the very scientists who are working with it.

Messer’s presentation highlighted a potential snag for plans to engineer wild ecosystems: Nature usually finds a way around our meddling. Pathogens evolve antibiotic resistance; insects and weeds evolve to thwart pesticides. Mosquitoes and invasive species reprogrammed with gene drives can be expected to adapt as well, especially if the drive harms the organism — the population will, in effect, survive by breaking the drive.

“In the long run, even with a gene drive, evolution wins in the end,” said Kevin Esvelt, an evolutionary engineer at the Massachusetts Institute of Technology. “On an evolutionary timescale, nothing we do matters. Except, of course, extinction. Evolution doesn’t come back from that one.”

Gene drives are a young technology, and none have been released into the wild. A handful of laboratory studies show that gene drives work in practice — in fruit flies, mosquitoes and yeast. Most of these experiments have found that the organisms begin to develop evolutionary resistance that should hinder the gene drives. But these proof-of-concept studies follow small populations of organisms. Large populations with more genetic diversity — like the vast swarms of insects in the wild — offer the most opportunities for resistance to emerge.

It’s impossible — and unethical — to test a gene drive in a vast wild population to sort out the kinks. Once a gene drive has been released, there may be no way to take it back. (Some researchers have suggested the possibility of releasing a second gene drive to shut down a rogue one. But that approach is hypothetical, and even if it worked, the ecological damage done in the meantime would remain unchanged.)

The next best option is to build models to approximate how wild populations might respond to the introduction of a gene drive. Messer and other researchers are doing just that. “For us, it was clear that there was this discrepancy — a lot of geneticists have done a great job at trying to build these systems, but they were not concerned that much with what is happening on a population level,” Messer said. Instead, he wants to learn “what will happen on the population level, if you set these things free and they can evolve for many generations — that’s where resistance comes into play.”

At the meeting at Cold Spring Harbor Laboratory, Messer discussed a computer model his team developed, which they described in a paper posted in June to a scientific preprint site. The work is one of three theoretical papers on gene drive resistance posted to the same site in the last five months — the others are from a researcher at the University of Texas, Austin, and a joint team from Harvard University and MIT. (The authors are all working to publish their research through traditional peer-reviewed journals.) According to Messer, his model suggests “resistance will evolve almost inevitably in standard gene drive systems.”

It’s still unclear where all this interplay between resistance and gene drives will end up. It could be that resistance will render the gene drive impotent. On the one hand, this may mean that releasing the drive was a pointless exercise; on the other hand, some researchers argue, resistance could be an important natural safety feature. Evolution is unpredictable by its very nature, but a handful of biologists are using mathematical models and careful lab experiments to try to understand how this powerful genetic tool will behave when it’s set loose in the wild.

Resistance Isn’t Futile

Gene drives aren’t exclusively a human technology. They occasionally appear in nature. Researchers first thought of harnessing the natural versions of gene drives decades ago, proposing to re-create them with “crude means, like radiation” or chemicals, said Anna Buchman, a postdoctoral researcher in molecular biology at the University of California, Riverside. These genetic oddities, she added, “could be manipulated to spread genes through a population or suppress a population.”

In 2003, Austin Burt, an evolutionary geneticist at Imperial College London, proposed a more finely tuned approach called a homing endonuclease gene drive, which would zero in on a specific section of DNA and alter it.

Burt mentioned the potential problem of resistance — and suggested some solutions — both in his seminal paper and in subsequent work. But for years, it was difficult to engineer a drive in the lab, because the available technology was cumbersome.

With the advent of genetic engineering, Burt’s idea became reality. In 2012, scientists unveiled CRISPR, a gene-editing tool that has been described as a molecular word processor. It has given scientists the power to alter genetic information in every organism they have tried it on. CRISPR locates a specific bit of genetic code and then breaks both strands of the DNA at that site, allowing genes to be deleted, added or replaced.

CRISPR provides a relatively easy way to release a gene drive. First, researchers insert a CRISPR-powered gene drive into an organism. When the organism mates, its CRISPR-equipped chromosome cleaves the matching chromosome coming from the other parent. The offspring’s genetic machinery then attempts to sew up this cut. When it does, it copies over the relevant section of DNA from the first parent — the section that contains the CRISPR gene drive. In this way, the gene drive duplicates itself so that it ends up on both chromosomes, and this will occur with nearly every one of the original organism’s offspring.

Just three years after CRISPR’s unveiling, scientists at the University of California, San Diego, used CRISPR to insert inheritable gene drives into the DNA of fruit flies, thus building the system Burt had proposed. Now scientists can order the essential biological tools on the internet and build a working gene drive in mere weeks. “Anyone with some genetics knowledge and a few hundred dollars can do it,” Messer said. “That makes it even more important that we really study this technology.”

Although there are many different ways gene drives could work in practice, two approaches have garnered the most attention: replacement and suppression. A replacement gene drive alters a specific trait. For example, an anti-malaria gene drive might change a mosquito’s genome so that the insect no longer had the ability to pick up the malaria parasite. In this situation, the new genes would quickly spread through a wild population so that none of the mosquitoes could carry the parasite, effectively stopping the spread of the disease.

A suppression gene drive would wipe out an entire population. For example, a gene drive that forced all offspring to be male would make reproduction impossible.

But wild populations may resist gene drives in unpredictable ways. “We know from past experiences that mosquitoes, especially the malaria mosquitoes, have such peculiar biology and behavior,” said Flaminia Catteruccia, a molecular entomologist at the Harvard T.H. Chan School of Public Health. “Those mosquitoes are much more resilient than we make them. And engineering them will prove more difficult than we think.” In fact, such unpredictability could likely be found in any species.

The three new papers use different models to try to understand this unpredictability, at least at its simplest level.

The Cornell group used a basic mathematical model to map how evolutionary resistance will emerge in a replacement gene drive. It focuses on how DNA heals itself after CRISPR breaks it (the gene drive pushes a CRISPR construct into each new organism, so it can cut, copy and paste itself again). The DNA repairs itself automatically after a break. Exactly how it does so is determined by chance. One option is called nonhomologous end joining, in which the two ends that were broken get stitched back together in a random way. The result is similar to what you would get if you took a sentence, deleted a phrase, and then replaced it with an arbitrary set of words from the dictionary — you might still have a sentence, but it probably wouldn’t make sense. The second option is homology-directed repair, which uses a genetic template to heal the broken DNA. This is like deleting a phrase from a sentence, but then copying a known phrase as a replacement — one that you know will fit the context.

Nonhomologous end joining is a recipe for resistance. Because the CRISPR system is designed to locate a specific stretch of DNA, it won’t recognize a section that has the equivalent of a nonsensical word in the middle. The gene drive won’t get into the DNA, and it won’t get passed on to the next generation. With homology-directed repair, the template could include the gene drive, ensuring that it would carry on.

The Cornell model tested both scenarios. “What we found was it really is dependent on two things: the nonhomologous end-joining rate and the population size,” said Robert Unckless, an evolutionary geneticist at the University of Kansas who co-authored the paper as a postdoctoral researcher at Cornell. “If you can’t get nonhomologous end joining under control, resistance is inevitable. But resistance could take a while to spread, which means you might be able to achieve whatever goal you want to achieve.” For example, if the goal is to create a bubble of disease-proof mosquitoes around a city, the gene drive might do its job before resistance sets in.
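
The qualitative behavior can be caricatured in a few lines of code. What follows is a simplified deterministic toy of my own devising, not the Cornell team’s published model, and it ignores population size, which in the real analysis requires a stochastic treatment. Each generation, the wild-type allele in a drive/wild heterozygote is cut; homology-directed repair copies the drive, while nonhomologous end joining leaves a scrambled, cut-resistant allele. Drive carriers pay an assumed fitness cost.

```python
def step(d, r, w, nhej=0.1, cost=0.2):
    """Advance allele frequencies one generation under random mating.

    d = drive allele, r = cut-resistant allele, w = wild-type allele.
    In drive/wild heterozygotes the wild allele is cut: homology-directed
    repair (probability 1 - nhej) copies the drive, while nonhomologous
    end joining (probability nhej) creates a resistant allele.
    """
    h = 1.0 - nhej
    D = d*d + d*r + d*w*(1 + h)   # drive alleles in the next gene pool
    R = r*r + r*w + d*r + d*w*(1 - h)
    W = w*w + r*w                 # only uncut wild alleles survive
    D *= 1.0 - cost               # assumed fitness cost of the drive
    total = D + R + W
    return D/total, R/total, W/total

d, r, w = 0.05, 0.0, 0.95         # release the drive at 5 percent
for _ in range(500):
    d, r, w = step(d, r, w)
print(round(d, 3), round(r, 3), round(w, 3))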

The team from Harvard and MIT also looked at nonhomologous end joining, but they took it a step further by suggesting a way around it: by designing a gene drive that targets multiple sites in the same gene. “If any of them cut at their sites, then it’ll be fine — the gene drive will copy,” said Charleston Noble, a doctoral student at Harvard and the first author of the paper. “You have a lot of chances for it to work.”

The gene drive could also target an essential gene, Noble said — one that the organism can’t afford to lose. The organism may want to kick out the gene drive, but not at the cost of altering a gene that’s essential to life.

The third paper, from the UT Austin team, took a different approach. It looked at how resistance could emerge at the population level through behavior, rather than within the target sequence of DNA. The target population could simply stop breeding with the engineered individuals, for example, thus stopping the gene drive.

“The math works out that if a population is inbred, at least to some degree, the gene drive isn’t going to work out as well as in a random population,” said James Bull, the author of the paper and an evolutionary biologist at Austin. “It’s not just sequence evolution. There could be all kinds of things going on here, by which populations block [gene drives],” Bull added. “I suspect this is the tip of the iceberg.”

Resistance is constrained only by the limits of evolutionary creativity. It could emerge from any spot along the target organism’s genome. And it extends to the surrounding environment as well. For example, if a mosquito is engineered to withstand malaria, the parasite itself may grow resistant and mutate into a newly infectious form, Noble said.

Not a Bug, but a Feature?

If the point of a gene drive is to push a desired trait through a population, then resistance would seem to be a bad thing. If a drive stops working before an entire population of mosquitoes is malaria-proof, for example, then the disease will still spread. But at the Cold Spring Harbor Laboratory meeting, Messer suggested the opposite: “Let’s embrace resistance. It could provide a valuable safety control mechanism.” It’s possible that the drive could move just far enough to stop a disease in a particular region, but then stop before it spread to all of the mosquitoes worldwide, carrying with it an unknowable probability of unforeseen environmental ruin.

Not everyone is convinced that this optimistic view is warranted. “It’s a false security,” said Ethan Bier, a geneticist at the University of California, San Diego. He said that while such a strategy is important to study, he worries that researchers will be fooled into thinking that forms of resistance offer “more of a buffer and safety net than they do.”

And while mathematical models are helpful, researchers stress that models can’t replace actual experimentation. Ecological systems are just too complicated. “We have no experience engineering systems that are going to evolve outside of our control. We have never done that before,” Esvelt said. “So that’s why a lot of these modeling studies are important — they can give us a handle on what might happen. But I’m also hesitant to rely on modeling and trying to predict in advance when systems are so complicated.”

Messer hopes to put his theoretical work into a real-world setting, at least in the lab. He is currently directing a gene drive experiment at Cornell that tracks multiple cages of around 5,000 fruit flies each — more animals than past studies have used to research gene drive resistance. The gene drive is designed to distribute a fluorescent protein through the population. The proteins will glow red under a special light, a visual cue showing how far the drive gets before resistance weeds it out.

Others are also working on resistance experiments: Esvelt and Catteruccia, for example, are working with George Church, a geneticist at Harvard Medical School, to develop a gene drive in mosquitoes that they say will be immune to resistance. They plan to insert multiple drives in the same gene — the strategy suggested by the Harvard/MIT paper.

Such experiments will likely guide the next generation of computer models, to help tailor them more precisely to a large wild population.

“I think it’s been interesting because there is this sort of going back and forth between theory and empirical work,” Unckless said. “We’re still in the early days of it, but hopefully it’ll be worthwhile for both sides, and we’ll make some informed and ethically correct decisions about what to do.”

Finding links between the Standard Model of particle physics and the octonions

In 2014, a graduate student at the University of Waterloo, Canada, named Cohl Furey rented a car and drove six hours south to Pennsylvania State University, eager to talk to a physics professor there named Murat Günaydin. Furey had figured out how to build on a finding of Günaydin’s from 40 years earlier — a largely forgotten result that supported a powerful suspicion about fundamental physics and its relationship to pure math.

The suspicion, harbored by many physicists and mathematicians over the decades but rarely actively pursued, is that the peculiar panoply of forces and particles that make up reality springs logically from the properties of eight-dimensional numbers called “octonions.”

As numbers go, the familiar real numbers — those found on the number line, like 1, π and -83.777 — just get things started. Real numbers can be paired up in a particular way to form “complex numbers,” first studied in 16th-century Italy, that behave like coordinates on a 2-D plane. Adding, subtracting, multiplying and dividing is like translating and rotating positions around the plane. Complex numbers, suitably paired, form 4-D “quaternions,” discovered in 1843 by the Irish mathematician William Rowan Hamilton, who on the spot ecstatically chiseled the formula into Dublin’s Broome Bridge. John Graves, a lawyer friend of Hamilton’s, subsequently showed that pairs of quaternions make octonions: numbers that define coordinates in an abstract 8-D space.

There the game stops. Proof surfaced in 1898 that the reals, complex numbers, quaternions and octonions are the only kinds of numbers that can be added, subtracted, multiplied and divided. The first three of these “division algebras” would soon lay the mathematical foundation for 20th-century physics, with real numbers appearing ubiquitously, complex numbers providing the math of quantum mechanics, and quaternions underlying Albert Einstein’s special theory of relativity. This has led many researchers to wonder about the last and least-understood division algebra. Might the octonions hold secrets of the universe?

“Octonions are to physics what the Sirens were to Ulysses,” Pierre Ramond, a particle physicist and string theorist at the University of Florida, said in an email.

Günaydin, the Penn State professor, was a graduate student at Yale in 1973 when he and his advisor Feza Gürsey found a surprising link between the octonions and the strong force, which binds quarks together inside atomic nuclei. An initial flurry of interest in the finding didn’t last. Everyone at the time was puzzling over the Standard Model of particle physics — the set of equations describing the known elementary particles and their interactions via the strong, weak and electromagnetic forces (all the fundamental forces except gravity). But rather than seek mathematical answers to the Standard Model’s mysteries, most physicists placed their hopes in high-energy particle colliders and other experiments, expecting additional particles to show up and lead the way beyond the Standard Model to a deeper description of reality. They “imagined that the next bit of progress will come from some new pieces being dropped onto the table, [rather than] from thinking harder about the pieces we already have,” said Latham Boyle, a theoretical physicist at the Perimeter Institute of Theoretical Physics in Waterloo, Canada.

Decades on, no particles beyond those of the Standard Model have been found. Meanwhile, the strange beauty of the octonions has continued to attract the occasional independent-minded researcher, including Furey, the Canadian grad student who visited Günaydin four years ago. Looking like an interplanetary traveler, with choppy silver bangs that taper to a point between piercing blue eyes, Furey scrawled esoteric symbols on a blackboard, trying to explain to Günaydin that she had extended his and Gürsey’s work by constructing an octonionic model of both the strong and electromagnetic forces.

“Communicating the details to him turned out to be a bit more of a challenge than I had anticipated, as I struggled to get a word in edgewise,” Furey recalled. Günaydin had continued to study the octonions since the ’70s by way of their deep connections to string theory, M-theory and supergravity — related theories that attempt to unify gravity with the other fundamental forces. But his octonionic pursuits had always been outside the mainstream. He advised Furey to find another research project for her Ph.D., since the octonions might close doors for her, as he felt they had for him.

Photo: Cohl Furey, a member of the high energy physics research group in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, in the Wynchfield gardens of Trinity Hall college, Cambridge, July 10, 2018. Credit: Susannah Ireland

But Furey didn’t — couldn’t — give up. Driven by a profound intuition that the octonions and other division algebras underlie nature’s laws, she told a colleague that if she didn’t find work in academia she planned to take her accordion to New Orleans and busk on the streets to support her physics habit. Instead, Furey landed a postdoc at the University of Cambridge in the United Kingdom. She has since produced a number of results connecting the octonions to the Standard Model that experts are calling intriguing, curious, elegant and novel. “She has taken significant steps toward solving some really deep physical puzzles,” said Shadi Tahvildar-Zadeh, a mathematical physicist at Rutgers University who recently visited Furey in Cambridge after watching an online series of lecture videos she made about her work.

Furey has yet to construct a simple octonionic model of all Standard Model particles and forces in one go, and she hasn’t touched on gravity. She stresses that the mathematical possibilities are many, and experts say it’s too soon to tell which way of amalgamating the octonions and other division algebras (if any) will lead to success.

“She has found some intriguing links,” said Michael Duff, a pioneering string theorist and professor at Imperial College London who has studied octonions’ role in string theory. “It’s certainly worth pursuing, in my view. Whether it will ultimately be the way the Standard Model is described, it’s hard to say. If it were, it would qualify for all the superlatives — revolutionary, and so on.”

Peculiar Numbers

I met Furey in June, in the porter’s lodge through which one enters Trinity Hall on the bank of the River Cam. Petite, muscular, and wearing a sleeveless black T-shirt (that revealed bruises from mixed martial arts), rolled-up jeans, socks with cartoon aliens on them and Vegetarian Shoes–brand sneakers, in person she was more Vancouverite than the otherworldly figure in her lecture videos. We ambled around the college lawns, ducking through medieval doorways in and out of the hot sun. On a different day I might have seen her doing physics on a purple yoga mat on the grass.

Furey, who is 39, said she was first drawn to physics at a specific moment in high school, in British Columbia. Her teacher told the class that only four fundamental forces underlie all the world’s complexity — and, furthermore, that physicists since the 1970s had been trying to unify all of them within a single theoretical structure. “That was just the most beautiful thing I ever heard,” she told me, steely-eyed. She had a similar feeling a few years later, as an undergraduate at Simon Fraser University in Vancouver, upon learning about the four division algebras. One such number system, or infinitely many, would seem reasonable. “But four?” she recalls thinking. “How peculiar.”

Video: Cohl Furey explains what octonions are and what they might have to do with particle physics.

After breaks from school spent ski-bumming, bartending abroad and intensely training as a mixed martial artist, Furey later met the division algebras again in an advanced geometry course and learned just how peculiar they become in four strokes. When you double the dimensions with each step as you go from real numbers to complex numbers to quaternions to octonions, she explained, “in every step you lose a property.” Real numbers can be ordered from smallest to largest, for instance, “whereas in the complex plane there’s no such concept.” Next, quaternions lose commutativity; for them, a × b doesn’t equal b × a. This makes sense, since multiplying higher-dimensional numbers involves rotation, and when you switch the order of rotations in more than two dimensions you end up in a different place. Much more bizarrely, the octonions are nonassociative, meaning (a × b) × c doesn’t equal a × (b × c). “Nonassociative things are strongly disliked by mathematicians,” said John Baez, a mathematical physicist at the University of California, Riverside, and a leading expert on the octonions. “Because while it’s very easy to imagine noncommutative situations — putting on shoes then socks is different from socks then shoes — it’s very difficult to think of a nonassociative situation.” If, instead of putting on socks then shoes, you first put your socks into your shoes, technically you should still then be able to put your feet into both and get the same result. “The parentheses feel artificial.”
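
The doubling steps, and the properties they shed, can be checked directly. Below is a minimal Cayley-Dickson implementation of my own, offered only as an illustration: each number is a nested pair of numbers from the previous algebra, and a single product rule, (a, b)(c, d) = (ac - d*b, da + bc*) with * denoting conjugation, builds complex numbers from reals, quaternions from complex numbers, and octonions from quaternions.

```python
# Cayley-Dickson doubling: a number in each algebra is an ordered pair
# of numbers from the one before (reals -> complex -> quaternions -> octonions).
def neg(x):
    if not isinstance(x, tuple):
        return -x
    return (neg(x[0]), neg(x[1]))

def conj(x):
    if not isinstance(x, tuple):
        return x                      # conjugating a real does nothing
    a, b = x
    return (conj(a), neg(b))

def add(x, y):
    if not isinstance(x, tuple):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c)).
    # Both operands are always nested to the same depth in this encoding.
    if not isinstance(x, tuple):
        return x * y
    (a, b), (c, d) = x, y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Quaternion units, written as pairs of complex numbers (pairs of reals).
zero = ((0, 0), (0, 0))
one  = ((1, 0), (0, 0))
i    = ((0, 1), (0, 0))
j    = ((0, 0), (1, 0))
k    = ((0, 0), (0, 1))

# Quaternions lose commutativity: ij = k but ji = -k.
assert mul(i, j) == k and mul(j, i) == neg(k)

# Octonion units, written as pairs of quaternions.
e1, e2, e4 = (i, zero), (j, zero), (zero, one)

# Octonions lose associativity: (e1 e2) e4 = -(e1 (e2 e4)).
lhs, rhs = mul(mul(e1, e2), e4), mul(e1, mul(e2, e4))
assert lhs == neg(rhs) and lhs != rhs
```

Running the assertions confirms both losses Furey describes: swapping the order of two quaternion units flips a sign, and regrouping a product of three octonion units changes the answer.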

The octonions’ seemingly unphysical nonassociativity has crippled many physicists’ efforts to exploit them, but Baez explained that their peculiar math has also always been their chief allure. Nature, with its four forces batting around a few dozen particles and anti-particles, is itself peculiar. The Standard Model is “quirky and idiosyncratic,” he said.

In the Standard Model, elementary particles are manifestations of three “symmetry groups” — essentially, ways of interchanging subsets of the particles that leave the equations unchanged. These three symmetry groups, SU(3), SU(2) and U(1), correspond to the strong, weak and electromagnetic forces, respectively, and they “act” on six types of quarks, two types of leptons, plus their anti-particles, with each type of particle coming in three copies, or “generations,” that are identical except for their masses. (The fourth fundamental force, gravity, is described separately, and incompatibly, by Einstein’s general theory of relativity, which casts it as curves in the geometry of space-time.)

Sets of particles manifest the symmetries of the Standard Model in the same way that four corners of a square must exist in order to realize a symmetry of 90-degree rotations. The question is, why this symmetry group — SU(3) × SU(2) × U(1)? And why this particular particle representation, with the observed particles’ funny assortment of charges, curious handedness and three-generation redundancy? The conventional attitude toward such questions has been to treat the Standard Model as a broken piece of some more complete theoretical structure. But a competing tendency is to try to use the octonions and “get the weirdness from the laws of logic somehow,” Baez said.

Furey began seriously pursuing this possibility in grad school, when she learned that quaternions capture the way particles translate and rotate in 4-D space-time. She wondered about particles’ internal properties, like their charge. “I realized that the eight degrees of freedom of the octonions could correspond to one generation of particles: one neutrino, one electron, three up quarks and three down quarks,” she said — a bit of numerology that had raised eyebrows before. The coincidences have since proliferated. “If this research project were a murder mystery,” she said, “I would say that we are still in the process of collecting clues.”

The Dixon Algebra

To reconstruct particle physics, Furey uses the product of the four division algebras, ℝ⊗ℂ⊗ℍ⊗𝕆 (ℝ for reals, ℂ for complex numbers, ℍ for quaternions and 𝕆 for octonions) — sometimes called the Dixon algebra, after Geoffrey Dixon, a physicist who first took this tack in the 1970s and ’80s before failing to get a faculty job and leaving the field. (Dixon forwarded me a passage from his memoirs: “What I had was an out-of-control intuition that these algebras were key to understanding particle physics, and I was willing to follow this intuition off a cliff if need be. Some might say I did.”)

Whereas Dixon and others proceeded by mixing the division algebras with extra mathematical machinery, Furey restricts herself; in her scheme, the algebras “act on themselves.” Combined as ℝ⊗ℂ⊗ℍ⊗𝕆, the four number systems form a 64-dimensional abstract space. Within this space, in Furey’s model, particles are mathematical “ideals”: elements of a subspace that, when multiplied by other elements, stay in that subspace, allowing particles to stay particles even as they move, rotate, interact and transform. The idea is that these mathematical ideals are the particles of nature, and they manifest the symmetries of ℝ⊗ℂ⊗ℍ⊗𝕆.

As Dixon knew, the algebra splits cleanly into two parts: ℂ⊗ℍ and ℂ⊗𝕆, the products of complex numbers with quaternions and octonions, respectively (real numbers are trivial). In Furey’s model, the symmetries associated with how particles move and rotate in space-time, together known as the Lorentz group, arise from the quaternionic part of the algebra, ℂ⊗ℍ. The symmetry group SU(3) × SU(2) × U(1), associated with particles’ internal properties and mutual interactions via the strong, weak and electromagnetic forces, comes from the octonionic part, ℂ⊗𝕆.

Günaydin and Gürsey, in their early work, already found SU(3) inside the octonions. Consider the base set of octonions, 1, e1, e2, e3, e4, e5, e6 and e7, which are unit distances in eight different orthogonal directions: They respect a group of symmetries called G2, which happens to be one of the rare “exceptional groups” that can’t be mathematically classified into other existing symmetry-group families. The octonions’ intimate connection to all the exceptional groups and other special mathematical objects has compounded the belief in their importance, convincing the eminent Fields medalist and Abel Prize–winning mathematician Michael Atiyah, for example, that the final theory of nature must be octonionic. “The real theory which we would like to get to,” he said in 2010, “should include gravity with all these theories in such a way that gravity is seen to be a consequence of the octonions and the exceptional groups.” He added, “It will be hard because we know the octonions are hard, but when you’ve found it, it should be a beautiful theory, and it should be unique.”

Holding e7 constant while transforming the other unit octonions reduces their symmetries to the group SU(3). Günaydin and Gürsey used this fact to build an octonionic model of the strong force acting on a single generation of quarks.

Furey has gone further. In her most recent published paper, which appeared in May in The European Physical Journal C, she consolidated several findings to construct the full Standard Model symmetry group, SU(3) × SU(2) × U(1), for a single generation of particles, with the math producing the correct array of electric charges and other attributes for an electron, neutrino, three up quarks, three down quarks and their anti-particles. The math also suggests a reason why electric charge is quantized in discrete units — essentially, because whole numbers are.

In that model’s way of arranging particles, however, it’s unclear how to naturally extend the scheme to cover the full three particle generations that exist in nature. But in another new paper that’s now circulating among experts and under review by Physics Letters B, Furey uses ℂ⊗𝕆 to construct the Standard Model’s two unbroken symmetries, SU(3) and U(1). (In nature, SU(2) × U(1) is broken down into U(1) by the Higgs mechanism, a process that imbues particles with mass.) In this case, the symmetries act on all three particle generations and also allow for the existence of particles called sterile neutrinos — candidates for dark matter that physicists are actively searching for now. “The three-generation model only has SU(3) × U(1), so it’s more rudimentary,” Furey told me, pen poised at a whiteboard. “The question is, is there an obvious way to go from the one-generation picture to the three-generation picture? I think there is.”

This is the main question she’s after now. The mathematical physicists Michel Dubois-Violette, Ivan Todorov and Svetla Drenska are also trying to model the three particle generations using a structure that incorporates octonions called the exceptional Jordan algebra. After years of working solo, Furey is beginning to collaborate with researchers who take different approaches, but she prefers to stick with the product of the four division algebras, ℝ⊗ℂ⊗ℍ⊗𝕆, acting on itself. It’s complicated enough and provides flexibility in the many ways it can be chopped up. Furey’s goal is to find the model that, in hindsight, feels inevitable and that includes mass, the Higgs mechanism, gravity and space-time.

Already, there’s a sense of space-time in the math. She finds that all multiplicative chains of elements of ℝ⊗ℂ⊗ℍ⊗𝕆 can be generated by 10 matrices called “generators.” Nine of the generators act like spatial dimensions, and the 10th, which has the opposite sign, behaves like time. String theory also predicts 10 space-time dimensions — and the octonions are involved there as well. Whether or how Furey’s work connects to string theory remains to be puzzled out.

So does her future. She’s looking for a faculty job now, but failing that, there’s always the ski slopes or the accordion. “Accordions are the octonions of the music world,” she said — “tragically misunderstood.” She added, “Even if I pursued that, I would always be working on this project.”

The Final Theory

Furey mostly demurred on my more philosophical questions about the relationship between physics and math, such as whether, deep down, they are one and the same. But she is taken with the mystery of why the property of division is so key. She also has a hunch, reflecting a common allergy to infinity, that ℝ⊗ℂ⊗ℍ⊗𝕆 is actually an approximation that will be replaced, in the final theory, with another, related mathematical system that does not involve the infinite continuum of real numbers.

That’s just intuition talking. But with the Standard Model passing tests to staggering perfection, and no enlightening new particles materializing at the Large Hadron Collider in Europe, a new feeling is in the air, both unsettling and exciting, ushering in a return to whiteboards and blackboards. There’s the burgeoning sense that “maybe we have not yet finished the process of fitting the current pieces together,” said Boyle, of the Perimeter Institute. He rates this possibility “more promising than many people realize,” and said it “deserves more attention than it is currently getting, so I am very glad that some people like Cohl are seriously pursuing it.”

Boyle hasn’t himself written about the Standard Model’s possible relationship to the octonions. But like so many others, he admits to hearing their siren song. “I share the hope,” he said, “and even the suspicion, that the octonions may end up playing a role, somehow, in fundamental physics, since they are very beautiful.”

Correction July 22, 2018: A previous version of the “Four Special Number Systems” graphic noted that e1, e2 and e3 are comparable to the quaternions’ i, j and k. In the representation of the Fano plane in the graphic, e1, e2 and e4 are comparable to the quaternions’ i, j and k.

Discovering gravitational waves from neutron stars


Rumours have been swirling for weeks that scientists have detected gravitational waves – tiny ripples in space and time – from a source other than colliding black holes. Now we can finally confirm that we’ve observed such waves produced by the violent collision of two massive, ultra-dense stars more than 100 million light years from Earth.

The discovery was made on August 17 by the global network of advanced gravitational-wave interferometers – comprising the twin LIGO detectors in the US and their European cousin, Virgo, in Italy. It is hugely important, not least because it helps solve some big mysteries in astrophysics – including the cause of bright flashes of light known as “gamma ray bursts” and perhaps even the origins of heavy elements such as gold.

As a member of the LIGO Scientific Collaboration, I was in raptures as soon as I saw the initial data. The period that followed was the most intense, sleep-deprived, but also incredibly exciting two months of my career.

The announcement comes just weeks after three scientists were awarded the Nobel Prize in Physics for their foundational work leading to the discovery of gravitational waves, first announced in February 2016. Since then, detecting gravitational waves from colliding black holes has started to feel like familiar territory – with four further such events detected. But as far as we know, colliding black holes offer a window only on the dark side of the universe. We haven’t been able to register light from these events with any other instruments.

But GW170817 – the catchy title for the event of August 17 – changes all that. That’s because the source of the waves this time was two “neutron stars” – incredibly dense stellar remnants the size of a city, each weighing more than the sun. These stars whizzed around each other at a sizeable fraction of the speed of light before merging in a cataclysmic collision that we’ve now seen shake the very fabric of space and time.

Mysteries solved

The cosmic concerto was just beginning, however. Astronomers have long suspected that the merger of two neutron stars could be the overture to a short gamma ray burst – an intense flash of gamma-ray light that releases more energy in a fraction of a second than the sun will pump out in ten billion years. For several decades we have observed these gamma ray bursts, but without knowing for sure what causes them.

However, just 1.7 seconds after the gravitational waves from GW170817 arrived at the Earth, NASA’s Fermi satellite observed a short burst of gamma rays in the same general region of the sky. LIGO and Virgo had found the smoking gun, and the link between neutron star collisions and short gamma ray bursts was finally and clearly established.

The combination of gravitational-wave and gamma-ray observations allowed the position of the cosmic explosion to be pinpointed to less than 30 square degrees on the sky – or about 100 times the size of the full moon. This, in turn, allowed a whole barrage of astronomical telescopes sensitive to light across the entire electromagnetic spectrum to search this small patch of sky for the aftermath of the explosion. And sure enough this was found – in an unfashionable backwater towards the edge of a fairly unassuming galaxy called NGC4993, in the constellation of Hydra.

Over the next few days and weeks astronomers watched agog as the embers from the explosion glowed brightly and faded, beautifully matching the pattern expected for a so-called “kilonova”. This is produced when neutron-rich material from the initial merger is ejected at great speed by the gamma ray burst and ploughs into the surrounding region of space, triggering the production of heavy radioactive elements.

These unstable elements typically split up (decay) to a stable state by emitting radiation. This is what causes the glow of the kilonova, which we have now confirmed by mapping it out in exquisite detail. Our observations also strongly support the theory that the stable end-products of these chains of reactions include copious amounts of precious metals like gold and platinum. While we’ve suspected neutron stars to be key to producing these elements in space, that hypothesis now looks a whole lot more convincing. Indeed, the kilonova that formed from the embers of GW170817 could have produced as much gold as the entire mass of the Earth.

By observing a kilonova “up close and personal” for the very first time, and seeing how well it fits into the unfolding astronomical storyboard that began with the neutron star merger, astronomers have taken a huge leap forward in our understanding of these violent cosmic events.

The idea that we are all made of stardust is increasingly appreciated in popular culture – in everything from documentaries to song lyrics. But the mind-blowing concept that the gold in our wedding rings and Rolex watches is made of neutron stardust is about to catch on. Perhaps even more exciting, however, is the enormous potential now unlocked by this radical, new approach to studying the cosmos.

By working collaboratively – using instruments that operate not just across the entire spectrum of light but are sensitive to gravitational waves and even neutrinos too – astronomers are poised to open a completely new “multi-messenger” window on the universe, with many further discoveries to be made and cosmic mysteries to be solved. For example, we have already used our observations to make the first ever joint measurement of the expansion rate of the universe, using both gravitational waves and light. Our paper will appear in Nature on October 16.

More results will also surely follow soon. The exciting new era of multi-messenger astronomy just started with a bang.

Time to look closer to home


Immigration is at the heart of the Brexit debate. It’s a large reason why economic arguments have failed to sway voters. Despite warnings of the high economic costs of leaving the EU (warnings supported by a majority of economists), immigration has continued to have a powerful influence – and is perhaps the major reason why the opinion polls have become so close in recent weeks.

The immigration debate is quite polarised. On one side, the Remain campaign either ducks the issue or focuses on the benefits that immigration brings to the economy as a whole. On the other side, the Leave camp focuses on its costs and plays into fears (often in a cruel and cynical way) that migrants are putting pressure on public services that are already at breaking point. At least among Labour supporters intending to vote leave, immigration also appears to be a key issue.

A more even-handed assessment of the costs and benefits of immigration is sorely needed. While immigration brings undoubted benefits, it also places pressure on particular communities. It also feeds a sense of fear and frustration, especially among the working class, about an economy that is not delivering. It comes to symbolise the lack of power and opportunity available to them.

It is all too easy to scapegoat migrants as the problem, but there are deeper-lying problems of economic and social policy at play that harm all citizens, wherever they are born.

In particular, five years of austerity policies under the present government’s leadership, with spending on public services at historic lows and a failure to invest in housing and infrastructure, is the main threat to prosperity for all. Yes, immigration creates tensions, but mostly when it occurs in a context where austerity is entrenched. It can be managed most effectively and to the advantage of everyone if austerity is rejected.

Beyond the Brexit debate, there is a wider need to rethink economic policy and the economy more generally in ways that address the genuine concerns of the working class. In this way, the cynicism and nationalism attached to the immigration question can be extinguished.


The case for immigration is clear. Migrants are net contributors to the public finances – they tend to pay into the system more than they take out. Migrants, for example, are likely to retire in their home countries rather than in the UK.

By adding to spending levels, migrants also create jobs. Migrants also bring vital skills (for example in nursing and building trades) and can plug important shortages in the labour market (such as in the caring sectors).

Lurid and cynical headlines denigrating migrants as benefit tourists conveniently ignore the economic as well as social benefits that migrants make in different parts of the nation.


The downsides to immigration come through the pressure it can place on public services. Housing and schooling stand out in this respect. Where communities face particularly high levels of immigration, housing waiting lists may lengthen and school places may become scarcer. These pressures can create tensions.

But here the problem is not immigration per se; it is the lack of adequate housing and schooling. Even without immigration, housing needs would be unmet in some areas, particularly in London.

UK public services across the board have undergone a significant number of cuts in recent years. Local government, in particular, has faced severe spending cuts. This is important as local government provides a host of key services like social care for children and pensioners.

There is also evidence that public spending cuts have fallen disproportionately on the most disadvantaged areas. Austerity policies, in this case, have widened social and economic inequalities within and between regions in the UK. They have also had wider economic effects – restricting growth and the tax receipts that fund public services.

Migrants have therefore entered a country that is without the public services to support many existing local communities. And their presence has brought the problems of housing and schooling together with health and childcare more to the fore. These problems are not the fault of migrants; rather they reflect years of chronic underinvestment in the public sector.

Frustration, not immigration

The reason why many working class communities are being won over by the Leave campaign is not immigration, but rather a shared sense of frustration about the lack of adequate public services.

And this frustration has been intensified by the lack of good employment opportunities. While employment has risen and unemployment fallen, those in work have suffered an unprecedented reduction in real pay. Underemployment, enforced self-employment and the rise of zero-hours contracts have added to the problems faced by the employed.

Immigration, in effect, has held up a mirror to the UK’s low wage economy. It has exposed the bleak prospects in terms of wages and working conditions available to large swathes of the workforce. And from the perspective of those living in working class communities, migrant workers taking low paid and insecure work has reinforced the lack of opportunity available to them.

Voting to Leave the EU can therefore be seen as an act of protest by sections of the working class against a system that works for neither indigenous nor migrant workers.

But immigration should not be used to excuse the government for its own poor record in supporting the needs of UK workers. It should also not be used to support the case for leaving the EU, on the basis that a Tory-led government would continue to impose austerity policies on the UK in the event of Brexit, making the lives of UK workers much worse.

Rather, immigration should be used as a way to oppose austerity and push for change. It should be used to highlight that another economic policy is needed – not one based on public spending cuts and a race to the bottom, but rather one based on inclusion and shared prosperity. It must be recognised that all citizens – migrant or otherwise – have needs and rights for well-funded public services and decent working conditions. Supporting immigration and voting Remain, above all, should be about building an alternative economy that serves everyone’s interests.


Trying to understand perception by understanding neurons


When he talks about where his fields of neuroscience and neuropsychology have taken a wrong turn, David Poeppel of New York University doesn’t mince words. “There’s an orgy of data but very little understanding,” he said to a packed room at the American Association for the Advancement of Science annual meeting in February. He decried the “epistemological sterility” of experiments that do piecework measurements of the brain’s wiring in the laboratory but are divorced from any guiding theories about behaviors and psychological phenomena in the natural world. It’s delusional, he said, to think that simply adding up those pieces will eventually yield a meaningful picture of complex thought.

He pointed to the example of Caenorhabditis elegans, the roundworm that is one of the most studied lab animals. “Here’s an organism that we literally know inside out,” he said, because science has worked out every one of its 302 neurons, all of their connections and the worm’s full genome. “But we have no satisfying model for the behavior of C. elegans,” he said. “We’re missing something.”

Poeppel is more than a gadfly attacking the status quo: Recently, his laboratory used real-world behavior to guide the design of a brain-activity study that led to a surprising discovery in the neuroscience of speech.

Critiques like Poeppel’s go back for decades. In the 1970s, the influential computational neuroscientist David Marr argued that brains and other information processing systems needed to be studied in terms of the specific problems they face and the solutions they find (what he called a computational level of analysis) to yield answers about the reasons behind their behavior. Looking only at what the systems do (an algorithmic analysis) or how they physically do it (an implementational analysis) is not enough. As Marr wrote in his posthumously published book, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, “… trying to understand perception by understanding neurons is like trying to understand a bird’s flight by understanding only feathers. It cannot be done.”

Poeppel and his co-authors carried on this tradition in a paper that appeared in Neuron last year. In it, they review ways in which overreliance on the “compelling” tools for manipulating and measuring the brain can lead scientists astray. Many types of experiments, for example, try to map specific patterns of neural activity to specific behaviors — by showing, say, that when a rat is choosing which way to run in a maze, neurons fire more often in a certain area of the brain. But those experiments could easily overlook what’s happening in the rest of the brain when the rat is making that choice, which might be just as relevant. Or they could miss that the neurons fire in the same way when the rat is stressed, so maybe it has nothing to do with making a choice. Worst of all, the experiment could ultimately be meaningless if the studied behavior doesn’t accurately reflect anything that happens naturally: A rat navigating a laboratory maze may be in a completely different mental state than one squirming through holes in the wild, so generalizing from the results is risky. Good experimental designs can go only so far to remedy these problems.

The common rebuttal to his criticism is that the huge advances that neuroscience has made are largely because of the kinds of studies he faults. Poeppel acknowledges this but maintains that neuroscience would know more about complex cognitive and emotional phenomena (rather than neural and genomic minutiae) if research started more often with a systematic analysis of the goals behind relevant behaviors, rather than jumping to manipulations of the neurons involved in their production. If nothing else, that analysis could help to target the research in productive ways.

That is what Poeppel and M. Florencia Assaneo, a postdoc in his laboratory, accomplished recently, as described in their paper for Science Advances. Their laboratory studies language processing — “how sound waves put ideas into your head,” in Poeppel’s words.

When people listen to speech, their ears translate the sound waves into neural signals that are then processed and interpreted by various parts of the brain, starting with the auditory cortex. Years of neurophysiological studies have observed that the waves of neural activity in the auditory cortex lock onto the audio signal’s “envelope” — essentially, the frequency with which the loudness changes. (As Poeppel put it, “The brain waves surf on the sound waves.”) By very faithfully “entraining” on the audio signal in this way, the brain presumably segments the speech into manageable chunks for processing.
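The envelope idea is easy to make concrete. The toy sketch below (pure Python; a simple rectify-and-smooth step stands in for the Hilbert-transform methods such studies typically use, and all parameters are illustrative) amplitude-modulates a carrier at a syllable-like 4.5 hertz and then recovers that rate from the envelope alone:

```python
import math

rate = 1000        # samples per second
syllable_hz = 4.5  # loudness fluctuates at a typical syllable rate
carrier_hz = 200   # stand-in for the fine structure of speech
n = 2 * rate       # two seconds of signal

# Toy "speech": a carrier whose loudness rises and falls at 4.5 Hz.
signal = [(1 + math.sin(2*math.pi*syllable_hz*t/rate)) *
          math.sin(2*math.pi*carrier_hz*t/rate) for t in range(n)]

# Recover the envelope: rectify, then smooth with a 50 ms moving average
# (wide enough to remove the carrier but keep the 4.5 Hz modulation).
window = rate // 20
rect = [abs(s) for s in signal]
envelope = [sum(rect[t:t+window]) / window for t in range(n - window)]

# Probe which rhythm dominates the envelope with a naive Fourier test.
mean_env = sum(envelope) / len(envelope)
def power(f):
    c = sum((e - mean_env) * math.cos(2*math.pi*f*t/rate)
            for t, e in enumerate(envelope))
    s = sum((e - mean_env) * math.sin(2*math.pi*f*t/rate)
            for t, e in enumerate(envelope))
    return c*c + s*s

freqs = [f/2 for f in range(2, 17)]  # 1.0 to 8.0 Hz in 0.5 Hz steps
print(max(freqs, key=power))         # the dominant envelope rhythm
```

The carrier itself fluctuates hundreds of times per second, but the envelope recovered from it fluctuates at the syllable rate — the slow rhythm the auditory cortex is thought to surf on.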


These images offer an example of where the auditory and speech motor cortical areas can be found in the human brain. Although the left hemisphere is most specialized for speech and language, some processing may occur bilaterally.

DOI: 10.1126/sciadv.aao3842

More curiously, some studies have seen that when people listen to spoken language, an entrained signal also shows up in the part of the motor cortex that controls speech. It is almost as though they are silently speaking along with the heard words, perhaps to aid comprehension — although Assaneo emphasized to me that any interpretation is highly controversial. Scientists can only speculate about what is really happening, in part because the motor-center entrainment doesn’t always occur. And it’s been a mystery whether the auditory cortex is directly driving the pattern in the motor cortex or whether some combination of activities elsewhere in the brain is responsible.

Assaneo and Poeppel took a fresh approach with a hypothesis that tied the real-world behavior of language to the observed neurophysiology. They noticed that the frequency of the entrained signals in the auditory cortex is commonly about 4.5 hertz — which also happens to be the mean rate at which syllables are spoken in languages around the world.

In her experiments, Assaneo had people listen to nonsensical strings of syllables played at rates between 2 and 7 hertz while she measured the activity in their auditory and speech motor cortices. (She used nonsense syllables so that the brain would have no semantic response to the speech, in case that might indirectly affect the motor areas. “When we perceive intelligible speech, the brain network being activated is more complex and extended,” she explained.) If the signals in the auditory cortex drive those in the motor cortex, then they should stay entrained to each other throughout the tests. If the motor cortex signal is independent, it should not change.

But what Assaneo observed was rather more interesting and surprising, Poeppel said: The auditory and speech motor activities did stay entrained, but only up to about 5 hertz. Once the audio changed faster than spoken language typically does, the motor cortex dropped out of sync. A computational model later confirmed that these results were consistent with the idea that the motor cortex has its own internal oscillator that naturally operates at around 4 to 5 hertz.
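The flavor of such a model can be sketched with an Adler-type phase oscillator: an internal rhythm near 4.5 hertz, nudged by the stimulus, stays locked to the stimulus only while the stimulus rate sits within the coupling's reach. This is a generic sketch of the idea, not the authors' actual model, and every parameter below is an illustrative assumption.

```python
import math

def observed_rate(stim_hz, natural_hz=4.5, coupling_hz=1.0,
                  dt=1e-3, seconds=30.0):
    """Euler-integrate d(theta)/dt = w0 + K*sin(ws*t - theta) and
    return the oscillator's mean output frequency in Hz."""
    w0 = 2 * math.pi * natural_hz   # intrinsic rhythm of the oscillator
    K = 2 * math.pi * coupling_hz   # strength of the stimulus coupling
    theta = 0.0
    for step in range(int(seconds / dt)):
        t = step * dt
        theta += dt * (w0 + K * math.sin(2*math.pi*stim_hz*t - theta))
    return theta / (2 * math.pi * seconds)

# The oscillator locks (output rate == stimulus rate) only for stimuli
# within about 1 Hz of its natural 4.5 Hz; outside that band it drifts
# back toward its own rhythm.
for f in (3.0, 4.0, 5.0, 6.0, 7.0):
    print(f, round(observed_rate(f), 2))
```

The locking band is set by the coupling strength: within it the oscillator follows the stimulus exactly, and beyond it the output frequency falls back toward the intrinsic rate — qualitatively the same drop-out above ~5 hertz that the experiments found.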

These complex results vindicate the researchers’ behavior-linked approach in several ways, according to Poeppel and Assaneo. Their equipment monitors 160 channels in the brain at sampling rates down to 1 hertz; it produces so much neurophysiological data that if they had simply looked for correlations in it, they would have undoubtedly found spurious ones. Only by starting with information drawn from linguistics and language behavior — the observation that there is something special about signals in the 4-to-5-hertz range because they show up in all spoken languages — did the researchers know to narrow their search for meaningful data to that range. And the specific interactions of the auditory and motor cortices they found are so nuanced that the researchers would never have thought to look for those on their own.

According to Assaneo, they are continuing to investigate how the rhythms of the brain and speech interact. Among other questions, they are curious about whether more-natural listening experiences might lift the limits on the association they saw. “It could be possible that intelligibility or attention increases the frequency range of the entrainment,” she said.

Challenges of keeping the masses moving

Cities worldwide face the problems and possibilities of “volume”: the stacking and moving of people and things within booming central business districts. We see this especially around mass public transport hubs.

As cities grow, they also become more vertical. They are expanding underground through rail corridors and above ground into the tall buildings that shape city skylines. Cities are deep as well as wide.

The urban geographer Stephen Graham describes cities as both “vertically stacked” and “vertically sprawled”, laced together by vertical and horizontal transport systems.

People flow in large cities is not only about how people move horizontally on rail and road networks into and out of city centres. It also includes vertical transport systems. These are the elevators, escalators and moving sidewalks that commuters use every day to get from the underground to the surface street level.

Major transport hubs are where many vertical and horizontal transport systems converge. It’s here that people flows are most dense.

But many large cities face the twin challenges of ageing infrastructure and increased volumes of people flowing through transport hubs. Problems of congestion, overcrowding, delays and even lockouts are becoming more common.

Governments are increasingly looking for ways to squeeze more capacity out of existing infrastructure networks.

Can we increase capacity by changing behaviour?

For the last three years, Transport for London (TfL) has been running standing-only escalator trials. The aim is to see if changing commuter behaviour might increase “throughput” of people and reduce delays.

London has some of the deepest underground stations in the world. This means the Tube system is heavily reliant on vertical transport such as escalators. But a long-standing convention means people only stand on the right side and allow others to walk up on the left.

In a trial at Holborn Station, one of London’s deepest at 23 metres, commuters were asked to stand on both sides during morning rush hour.

The trials showed that changing commuter behaviour could increase capacity by as much as 30% at peak times. But this works only in Tube stations with very tall escalators. At stations with escalators less than 18 metres high, like Canary Wharf, the trials found the opposite: standing on both sides would only increase congestion across the network.


By standing only, 30% more people could fit on an escalator in the trial at Holborn Station.

The difference is down to human behaviour. People are simply less willing to walk up very tall escalators. This means a standing-only policy across the network won’t improve people flow uniformly and could even make congestion worse.
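The trade-off TfL observed can be captured in a toy model. The sketch below is purely illustrative: the per-lane rates and the willingness-to-walk curve are assumed numbers chosen to show the shape of the effect, not TfL’s measured figures.

```python
def mixed_throughput(height_m, stand_rate=55.0, walk_rate=80.0):
    """People per minute under the stand-right, walk-left convention.

    Assumption: the fraction of commuters willing to walk falls off
    linearly with escalator height, so on very tall escalators the
    walking lane runs largely empty.
    """
    willing_to_walk = min(1.0, max(0.0, 1.4 - height_m / 20.0))
    return stand_rate + willing_to_walk * walk_rate

def standing_only_throughput(stand_rate=55.0):
    # Both lanes fully occupied by standing passengers.
    return 2 * stand_rate

for height in (10, 14, 18, 23):
    mixed = mixed_throughput(height)
    both = standing_only_throughput()
    change = (both - mixed) / mixed * 100
    print(f"{height:2d} m: mixed {mixed:5.1f}/min, "
          f"standing-only {both:5.1f}/min ({change:+.0f}%)")
```

In this toy model the crossover sits around 14 metres; TfL’s trials put it nearer 18 metres, but the mechanism is the same: standing-only wins only where few people are willing to walk anyway.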

Is people movement data a solution?

With the introduction of ticketless transport cards, it’s now possible to gather more data about people flow through busy transport hubs as we tap on and off.

Tracking commuters’ in-station journeys through their Wi-Fi enabled devices, such as smartphones, can also offer a detailed picture of movement between platforms, congestion and delays.
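A rough sketch of how such a picture is assembled follows; the log format, zone names and identifiers here are invented for illustration, and real deployments hash device identifiers before analysis.

```python
from collections import defaultdict

# Hypothetical sightings: (hashed device id, station zone, seconds).
sightings = [
    ("3f9a", "ticket_hall", 0), ("3f9a", "escalator", 40),
    ("3f9a", "platform_N", 95),
    ("8c2e", "ticket_hall", 5), ("8c2e", "escalator", 80),
    ("8c2e", "platform_N", 210),
]

journeys = defaultdict(list)
for device, zone, t in sightings:
    journeys[device].append((t, zone))

# Time between consecutive zones per device; aggregated over thousands
# of devices, the slow legs point to where congestion builds.
for device, events in sorted(journeys.items()):
    events.sort()
    for (t0, z0), (t1, z1) in zip(events, events[1:]):
        print(f"{device}: {z0} -> {z1} in {t1 - t0}s")
```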

Transport for London has already conducted its first Wi-Fi tracking trial in the London Underground.

Issues of privacy loom large in harvesting mobile data from individual devices. Still, there’s enormous potential to use this data to resolve issues of overcrowding and inform commuters about delays and congestion en route.

London’s transport authority hopes the data from tracking users’ phones will help ease congestion, plan better timetables and improve station designs.

Governments are also increasingly turning to consultancy firms that specialise in simulation modelling of people flow. That’s everything from check-in queues and processing at terminals, to route tracking and passenger flow on escalators.

Using data analytics, people movement specialists identify movement patterns, count footfall and analyse commuter behaviour. In existing infrastructure, they look to achieve “efficiencies” through changes to scheduling and routing, and assessing the directional flow of commuters.
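The footfall-counting side of that work can be shown in miniature. The tap-in timestamps below are made up, and binning into 15-minute windows is one simple convention among many.

```python
from collections import Counter

# Hypothetical tap-in times at one gateline, in seconds since midnight.
taps = [29100, 29400, 29520, 30100, 30200, 30250, 30300, 61200, 61500]

BIN = 900  # 15-minute bins
footfall = Counter(t // BIN for t in taps)

# The busiest window is where scheduling and routing changes pay off most.
peak_bin, count = max(footfall.items(), key=lambda kv: kv[1])
start = peak_bin * BIN
print(f"peak window {start // 3600:02d}:{start % 3600 // 60:02d}, {count} taps")
```

With these sample timestamps, the busiest window is the 15 minutes from 08:15, with four taps.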

Construction and engineering companies are also beginning to employ people movement specialists during the design phase of large infrastructure projects.

Beijing’s Daxing airport, due for completion in 2020, will be the largest transport hub in China. It’s also the first major infrastructure project to use crowd simulation and analysis software during the design process to test anticipated volume against capacity.

The advice of people movement specialists can significantly shape physical infrastructure: the width of platforms, the number and placement of gates, and the layout and positioning of vertical transport such as escalators.
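The capacity question those specialists answer can be sketched with a simple deterministic fluid model. The figures below are invented, and real crowd-simulation tools model individual agents, but the underlying check is the same: does demand exceed the combined gate rate?

```python
def queue_after(arrivals_per_min, gates, gate_rate, minutes):
    """Passengers queued at a gateline after `minutes` of steady demand.

    Fluid approximation: demand above gates * gate_rate accumulates
    as a queue; spare capacity drains it.
    """
    capacity = gates * gate_rate
    queue = 0.0
    for _ in range(minutes):
        queue = max(0.0, queue + arrivals_per_min - capacity)
    return queue

# 150 passengers/min against 5 gates at 25/min each: the queue grows.
print(queue_after(150, 5, 25, 10))   # 250.0
# A sixth gate lifts capacity to 150/min and the queue never forms.
print(queue_after(150, 6, 25, 10))   # 0.0
```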

Movement analytics is becoming big business

People movement analytics is becoming big business, especially where financialisation of public assets is increasing. This means infrastructure is being developed through complex public-private partnership models. As a result, transport hubs are now also commercial spaces for retail, leisure and business activities.

Commuters are no longer only in transit when they make their way through these spaces. In many of these developments, they are also potential consumers moving through a retail concourse.

In an era of “digital disruption”, which is particularly affecting the retail sector, information about commuter mobility has potential commercial value. The application of data analytics to people flow and its use by the people movement industry to achieve “efficiencies” needs careful scrutiny to ensure benefits beyond commercial gain.

At the same time, mobility data may well help our increasingly vertical cities to keep flowing up, down and across.