Time to look closer to home

Immigration is at the heart of the Brexit debate. It’s a large reason why economic arguments have failed to sway voters. Despite warnings of the high economic costs of leaving the EU (warnings supported by a majority of economists), immigration has continued to have a powerful influence – and is perhaps the major reason why the opinion polls have become so close in recent weeks.

The immigration debate is quite polarised. On one side, the Remain campaign either ducks the issue or focuses on the benefits that immigration brings to the economy as a whole. On the other side, the Leave camp focuses on its costs and plays into fears (often in a cruel and cynical way) that migrants are putting pressure on public services that are already at breaking point. At least among Labour supporters intending to vote leave, immigration also appears to be a key issue.

A more even-handed assessment of the costs and benefits of immigration is sorely needed. While immigration brings undoubted benefits, it also places pressure on particular communities, and it feeds a sense of fear and frustration, especially among the working class, about an economy that is not delivering. It comes to symbolise the lack of power and opportunity available to them.

It is all too easy to scapegoat migrants as the problem, but there are deeper-lying problems of economic and social policy at play that harm all citizens, wherever they are born.

In particular, five years of austerity policies under the present government’s leadership, with spending on public services at historic lows and a failure to invest in housing and infrastructure, is the main threat to prosperity for all. Yes, immigration creates tensions, but mostly when it occurs in a context where austerity is entrenched. It can be managed most effectively and to the advantage of everyone if austerity is rejected.

Beyond the Brexit debate, there is a wider need to rethink economic policy and the economy more generally in ways that address the genuine concerns of the working class. In this way, the cynicism and nationalism attached to the immigration question can be extinguished.

Benefits

The case for immigration is clear. Migrants are net contributors to the public finances – they tend to pay into the system more than they take out. Migrants, for example, are likely to retire in their home countries rather than in the UK.

By adding to spending levels, migrants also create jobs. They bring vital skills (for example in nursing and the building trades) and can plug important gaps in the labour market (such as in the caring sectors).

Lurid and cynical headlines denigrating migrants as benefit tourists conveniently ignore the economic as well as social contributions that migrants make in different parts of the nation.

Pressures

The downsides to immigration come through the pressure it can place on public services. Housing and schooling stand out in this respect. Where communities face particularly high levels of immigration, housing waiting lists may lengthen and school places may become scarcer. These pressures can create tensions.

But here the problem is not immigration per se; it is the lack of adequate housing and schooling. Even without immigration, housing needs would be unmet in some areas, particularly in London.

UK public services across the board have undergone significant cuts in recent years. Local government, in particular, has faced severe spending cuts. This matters because local government provides a host of key services, such as social care for children and pensioners.

There is also evidence that public spending cuts have fallen disproportionately on the most disadvantaged areas. Austerity policies, in this case, have widened social and economic inequalities within and between regions in the UK. They have also had wider economic effects – restricting growth and the tax receipts that fund public services.

Migrants have therefore entered a country that lacks the public services to support many existing local communities. And their presence has brought the problems of housing and schooling, together with health and childcare, more to the fore. These problems are not the fault of migrants; rather they reflect years of chronic underinvestment in the public sector.

Frustration, not immigration

The reason why many working class communities are being won over by the Leave campaign is not immigration, but rather a shared sense of frustration about the lack of adequate public services.

And this frustration has been intensified by the lack of good employment opportunities. While employment has risen and unemployment fallen, those in work have suffered an unprecedented reduction in real pay. Underemployment, enforced self-employment, and the rise of zero-hours contracts have added to the problems faced by the employed.

Immigration, in effect, has held up a mirror to the UK’s low wage economy. It has exposed the bleak prospects in terms of wages and working conditions available to large swathes of the workforce. And from the perspective of those living in working class communities, migrant workers taking low paid and insecure work has reinforced the lack of opportunity available to them.

Voting to Leave the EU can therefore be seen as an act of protest by sections of the working class against a system that works for neither indigenous nor migrant workers.

But immigration should not be used to excuse the government’s own poor record in supporting the needs of UK workers. Nor should it be used to support the case for leaving the EU, not least because a Tory-led government would continue to impose austerity policies on the UK in the event of Brexit, making the lives of UK workers much worse.

Rather, immigration should be used as a way to oppose austerity and push for change. It should be used to highlight that another economic policy is needed – not one based on public spending cuts and a race to the bottom, but rather one based on inclusion and shared prosperity. It must be recognised that all citizens – migrant or otherwise – have needs and rights for well-funded public services and decent working conditions. Supporting immigration and voting Remain, above all, should be about building an alternative economy that serves everyone’s interests.

 

Trying to understand perception by understanding neurons

When he talks about where his fields of neuroscience and neuropsychology have taken a wrong turn, David Poeppel of New York University doesn’t mince words. “There’s an orgy of data but very little understanding,” he said to a packed room at the American Association for the Advancement of Science annual meeting in February. He decried the “epistemological sterility” of experiments that do piecework measurements of the brain’s wiring in the laboratory but are divorced from any guiding theories about behaviors and psychological phenomena in the natural world. It’s delusional, he said, to think that simply adding up those pieces will eventually yield a meaningful picture of complex thought.

He pointed to the example of Caenorhabditis elegans, the roundworm that is one of the most studied lab animals. “Here’s an organism that we literally know inside out,” he said, because science has worked out every one of its 302 neurons, all of their connections and the worm’s full genome. “But we have no satisfying model for the behavior of C. elegans,” he said. “We’re missing something.”

Poeppel is more than a gadfly attacking the status quo: Recently, his laboratory used real-world behavior to guide the design of a brain-activity study that led to a surprising discovery in the neuroscience of speech.

Critiques like Poeppel’s go back for decades. In the 1970s, the influential computational neuroscientist David Marr argued that brains and other information processing systems needed to be studied in terms of the specific problems they face and the solutions they find (what he called a computational level of analysis) to yield answers about the reasons behind their behavior. Looking only at what the systems do (an algorithmic analysis) or how they physically do it (an implementational analysis) is not enough. As Marr wrote in his posthumously published book, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, “… trying to understand perception by understanding neurons is like trying to understand a bird’s flight by understanding only feathers. It cannot be done.”

Poeppel and his co-authors carried on this tradition in a paper that appeared in Neuron last year. In it, they review ways in which overreliance on the “compelling” tools for manipulating and measuring the brain can lead scientists astray. Many types of experiments, for example, try to map specific patterns of neural activity to specific behaviors — by showing, say, that when a rat is choosing which way to run in a maze, neurons fire more often in a certain area of the brain. But those experiments could easily overlook what’s happening in the rest of the brain when the rat is making that choice, which might be just as relevant. Or they could miss that the neurons fire in the same way when the rat is stressed, so maybe it has nothing to do with making a choice. Worst of all, the experiment could ultimately be meaningless if the studied behavior doesn’t accurately reflect anything that happens naturally: A rat navigating a laboratory maze may be in a completely different mental state than one squirming through holes in the wild, so generalizing from the results is risky. Good experimental designs can go only so far to remedy these problems.

The common rebuttal to his criticism is that the huge advances that neuroscience has made are largely because of the kinds of studies he faults. Poeppel acknowledges this but maintains that neuroscience would know more about complex cognitive and emotional phenomena (rather than neural and genomic minutiae) if research started more often with a systematic analysis of the goals behind relevant behaviors, rather than jumping to manipulations of the neurons involved in their production. If nothing else, that analysis could help to target the research in productive ways.

That is what Poeppel and M. Florencia Assaneo, a postdoc in his laboratory, accomplished recently, as described in their paper for Science Advances. Their laboratory studies language processing — “how sound waves put ideas into your head,” in Poeppel’s words.

When people listen to speech, their ears translate the sound waves into neural signals that are then processed and interpreted by various parts of the brain, starting with the auditory cortex. Years of neurophysiological studies have observed that the waves of neural activity in the auditory cortex lock onto the audio signal’s “envelope” — essentially, the frequency with which the loudness changes. (As Poeppel put it, “The brain waves surf on the sound waves.”) By very faithfully “entraining” on the audio signal in this way, the brain presumably segments the speech into manageable chunks for processing.
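
To make the notion of an “envelope” concrete, here is a minimal Python sketch of how one might extract it from a recorded waveform and estimate its dominant rhythm. It is an illustration only: the function names and the 10 Hz smoothing cutoff are assumptions of mine, not the researchers’ actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff_hz=10.0):
    """Slow amplitude envelope of a speech waveform (illustrative).

    The magnitude of the analytic (Hilbert) signal tracks instantaneous
    loudness; low-pass filtering keeps only the slow fluctuations that
    correspond to the syllable rhythm.
    """
    loudness = np.abs(hilbert(audio))        # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))   # fourth-order low-pass filter
    return filtfilt(b, a, loudness)          # zero-phase smoothing

def dominant_modulation_hz(envelope, fs):
    """Frequency at which the envelope fluctuates most strongly."""
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs[spectrum.argmax()]          # roughly 4-5 Hz for natural speech
```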

These images offer an example of where the auditory and speech motor cortical areas can be found in the human brain. Although the left hemisphere is most specialized for speech and language, some processing may occur bilaterally.

DOI: 10.1126/sciadv.aao3842

More curiously, some studies have seen that when people listen to spoken language, an entrained signal also shows up in the part of the motor cortex that controls speech. It is almost as though they are silently speaking along with the heard words, perhaps to aid comprehension — although Assaneo emphasized to me that any interpretation is highly controversial. Scientists can only speculate about what is really happening, in part because the motor-center entrainment doesn’t always occur. And it’s been a mystery whether the auditory cortex is directly driving the pattern in the motor cortex or whether some combination of activities elsewhere in the brain is responsible.

Assaneo and Poeppel took a fresh approach with a hypothesis that tied the real-world behavior of language to the observed neurophysiology. They noticed that the frequency of the entrained signals in the auditory cortex is commonly about 4.5 hertz — which also happens to be the mean rate at which syllables are spoken in languages around the world.

In her experiments, Assaneo had people listen to nonsensical strings of syllables played at rates between 2 and 7 hertz while she measured the activity in their auditory and speech motor cortices. (She used nonsense syllables so that the brain would have no semantic response to the speech, in case that might indirectly affect the motor areas. “When we perceive intelligible speech, the brain network being activated is more complex and extended,” she explained.) If the signals in the auditory cortex drive those in the motor cortex, then they should stay entrained to each other throughout the tests. If the motor cortex signal is independent, it should not change.

But what Assaneo observed was rather more interesting and surprising, Poeppel said: The auditory and speech motor activities did stay entrained, but only up to about 5 hertz. Once the audio changed faster than spoken language typically does, the motor cortex dropped out of sync. A computational model later confirmed that these results were consistent with the idea that the motor cortex has its own internal oscillator that naturally operates at around 4 to 5 hertz.
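
The authors’ computational model is more elaborate than anything that fits here, but the qualitative behaviour they describe (an oscillator that locks to a rhythm near its own natural frequency and falls out of sync when driven too fast) can be sketched with a simple forced phase oscillator. All parameter values below are illustrative assumptions, not figures fitted by Assaneo and Poeppel.

```python
import numpy as np

def mean_frequency(f_stim, f0=4.5, k=1.0, dt=1e-3, t_max=60.0):
    """Average frequency (Hz) of a phase oscillator driven at f_stim.

    Adler-type dynamics: dphi/dt = 2*pi*f0 + 2*pi*k*sin(phi_stim - phi).
    The oscillator can phase-lock to the stimulus only while
    |f_stim - f0| < k; outside that band it drifts at its own rate.
    """
    steps = int(t_max / dt)
    t = np.arange(steps) * dt
    phi_stim = 2 * np.pi * f_stim * t      # phase of the syllable rhythm
    phi = 0.0
    for i in range(steps):                 # forward-Euler integration
        phi += dt * 2 * np.pi * (f0 + k * np.sin(phi_stim[i] - phi))
    return phi / (2 * np.pi * t_max)       # cycles per second over the run

for f in range(2, 8):   # syllable rates from 2 to 7 Hz, as in the experiment
    print(f"stimulus {f} Hz -> oscillator runs at {mean_frequency(f):.2f} Hz")
```

With a natural frequency of 4.5 Hz, the toy oscillator tracks stimuli near that rate and detaches as the drive moves further away, echoing the drop-out above about 5 hertz.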

These complex results vindicate the researchers’ behavior-linked approach in several ways, according to Poeppel and Assaneo. Their equipment monitors 160 channels in the brain at sampling rates down to 1 hertz; it produces so much neurophysiological data that if they had simply looked for correlations in it, they would have undoubtedly found spurious ones. Only by starting with information drawn from linguistics and language behavior — the observation that there is something special about signals in the 4-to-5-hertz range because they show up in all spoken languages — did the researchers know to narrow their search for meaningful data to that range. And the specific interactions of the auditory and motor cortices they found are so nuanced that the researchers would never have thought to look for those on their own.

According to Assaneo, they are continuing to investigate how the rhythms of the brain and speech interact. Among other questions, they are curious about whether more-natural listening experiences might lift the limits on the association they saw. “It could be possible that intelligibility or attention increases the frequency range of the entrainment,” she said.

Challenges of keeping the masses moving

Cities worldwide face the problems and possibilities of “volume”: the stacking and moving of people and things within booming central business districts. We see this especially around mass public transport hubs.

As cities grow, they also become more vertical. They are expanding underground through rail corridors and above ground into the tall buildings that shape city skylines. Cities are deep as well as wide.

The urban geographer Stephen Graham describes cities as both “vertically stacked” and “vertically sprawled”, laced together by vertical and horizontal transport systems.

People flow in large cities is not only about how people move horizontally on rail and road networks into and out of city centres. It also includes vertical transport systems. These are the elevators, escalators and moving sidewalks that commuters use every day to get from the underground to the surface street level.

Major transport hubs are where many vertical and horizontal transport systems converge. It’s here that people flows are most dense.

But many large cities face the twin challenges of ageing infrastructure and increased volumes of people flowing through transport hubs. Problems of congestion, overcrowding, delays and even lockouts are becoming more common.

Governments are increasingly looking for ways to squeeze more capacity out of existing infrastructure networks.

Can we increase capacity by changing behaviour?

For the last three years, Transport for London (TfL) has been running standing-only escalator trials. The aim is to see if changing commuter behaviour might increase “throughput” of people and reduce delays.

London has some of the deepest underground stations in the world. This means the Tube system is heavily reliant on vertical transport such as escalators. But a long-standing convention means people only stand on the right side and allow others to walk up on the left.

In a trial at Holborn Station, one of London’s deepest at 23 metres, commuters were asked to stand on both sides during morning rush hour.

The results of the trials showed that changing commuter behaviour could increase capacity by as much as 30% at peak times. But this works only in Tube stations with very tall escalators. At stations with escalators less than 18 metres high, like Canary Wharf, the trials found the opposite: a standing-only policy would simply increase congestion across the network.

 

By standing only, 30% more people could fit on an escalator in the trial at Holborn Station.

The difference is down to human behaviour. People are simply less willing to walk up very tall escalators. This means a standing-only policy across the network won’t improve people flow uniformly and could even make congestion worse.
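
A back-of-the-envelope capacity model shows why escalator height matters. Every number below is an assumption chosen for illustration (step spacing, belt speed, walkers’ clearance, willingness to walk), not TfL data; the point is only that the ranking flips once few people are willing to walk.

```python
STEP_DEPTH = 0.4    # metres of escalator per standing passenger (assumed)
WALK_SPACING = 0.7  # clearance walkers keep in the walking lane, metres (assumed)
BELT_SPEED = 0.75   # escalator speed, m/s (typical value, assumed)
WALK_SPEED = 0.70   # walking speed relative to the steps, m/s (assumed)

def mixed_throughput(walk_fraction: float) -> float:
    """People per second with one standing lane plus one walking lane.

    walk_fraction is the share of commuters willing to walk; on very
    tall escalators it is small, so the walking lane runs half-empty.
    """
    stand_lane = BELT_SPEED / STEP_DEPTH
    walk_lane_capacity = (BELT_SPEED + WALK_SPEED) / WALK_SPACING
    willing_walkers = walk_fraction * 2 * stand_lane   # rough demand estimate
    return stand_lane + min(walk_lane_capacity, willing_walkers)

def standing_only_throughput() -> float:
    """Both lanes standing: two fully packed standing lanes."""
    return 2 * BELT_SPEED / STEP_DEPTH

for label, walk_fraction in [("tall escalator (23 m)", 0.25),
                             ("short escalator (<18 m)", 0.60)]:
    gain = standing_only_throughput() / mixed_throughput(walk_fraction) - 1
    print(f"{label}: standing-only changes throughput by {gain:+.0%}")
```

With these assumed numbers the model reproduces the direction of the trial results: roughly a one-third gain at a tall station where few commuters will walk, and a small loss at a short one where most do.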

Is people movement data a solution?

With the introduction of ticketless transport cards it’s now possible to gather more data about people flow through busy transport hubs as we tap on and off.

Tracking commuters’ in-station journeys through their Wi-Fi enabled devices, such as smart phones, can also offer a detailed picture of movement between platforms, congestion and delays.

Transport for London has already conducted its first Wi-Fi tracking trial in the London Underground.

Issues of privacy loom large in harvesting mobile data from individual devices. Still, there’s enormous potential to use this data to resolve issues of overcrowding and inform commuters about delays and congestion en route.

London’s transport authority hopes the data from tracking users’ phones will help ease congestion, plan better timetables and improve station designs.

Governments are also increasingly turning to consultancy firms that specialise in simulation modelling of people flow. That’s everything from check-in queues and processing at terminals, to route tracking and passenger flow on escalators.

Using data analytics, people movement specialists identify movement patterns, count footfall and analyse commuter behaviour. In existing infrastructure, they look to achieve “efficiencies” through changes to scheduling and routing, and assessing the directional flow of commuters.

Construction and engineering companies are also beginning to employ people movement specialists during the design phase of large infrastructure projects.

Beijing’s Daxing airport, due for completion in 2020, will be the largest transport hub in China. It’s also the first major infrastructure project to use crowd simulation and analysis software during the design process to test anticipated volume against capacity.

The advice of people movement specialists can have significant impacts on physical infrastructure. This involves aspects such as the width of platforms, number and placement of gates, and the layout and positioning of vertical transport, such as escalators.

Movement analytics is becoming big business

People movement analytics is becoming big business, especially where financialisation of public assets is increasing. This means infrastructure is being developed through complex public-private partnership models. As a result, transport hubs are now also commercial spaces for retail, leisure and business activities.

Commuters are no longer only in transit when they make their way through these spaces. They are potential consumers as they move through the retail concourse in many of these developments.

In an era of “digital disruption”, which is particularly affecting the retail sector, information about commuter mobility has potential commercial value. The application of data analytics to people flow and its use by the people movement industry to achieve “efficiencies” needs careful scrutiny to ensure benefits beyond commercial gain.

At the same time, mobility data may well help our increasingly vertical cities to keep flowing up, down and across.

Emerging Markets Under Pressure

Emerging markets have come under a bit of pressure recently, with the combination of the dollar’s rise and higher U.S. ten-year rates serving as the trigger. The Governor of the Reserve Bank of India has—rather remarkably—even called on the U.S. Federal Reserve to slow the pace of its quantitative tightening to give emerging economies a bit of a break. (He could have equally called on the Administration to change its fiscal policy so as to reduce issuance, but the Fed is presumably a softer target.)

Yet the pressure on emerging economies hasn’t been uniform (the exchange rate moves in the chart are through Wednesday, June 13th; they don’t reflect Thursday’s selloff).

Emerging Market Currencies

With the help of Benjamin Della Rocca, a research analyst at the Council on Foreign Relations, I split emerging economies into three main groupings:

  1. Oil importing economies with current account deficits
  2. Oil importing economies with significant current account surpluses (a group consisting of emerging Asian economies)
  3. Oil-exporting economies

Change in Currency Value

But there is clearly a divide between oil importers with surpluses (basically, most of East Asia) and oil importers with deficits. The emerging economies facing the most pressure, not surprisingly, are those with growing current account deficits and large external funding needs, notably Turkey and Argentina.

Current Account Balances of Deficit Countries

In emerging-market land, at least, trade deficits still matter.

In fact, those that have experienced the most depreciation tend to share the following vulnerabilities:

  • A current account deficit
  • A high level of liability dollarization (whether in the government’s liabilities, or the corporate sector)
  • Limited reserves
  • Net oil imports
  • Relatively little trade exposure to the U.S., leaving little to gain from a stimulus-induced spike in U.S. demand
  • Doubts about their commitment to delivering their inflation targets, and thus about the credibility of their monetary policy frameworks.

It is all a relatively familiar list.

Though to be fair, Brazil has faced heavy depreciation pressure recently even though it has brought its current account deficit down significantly since 2014.* Part of the real’s depreciation is a function of the fact that Brazil and Argentina compete in a host of markets, and Brazil must allow some depreciation to keep pace with Argentina. Part of it may be a function of market dynamics too, as investors pull out of funds with emerging market exposure, amplifying down moves. And of course, part of it comes from increasingly pessimistic expectations for Brazil’s ongoing economic recovery—driven by uncertainty ahead of the coming presidential elections together with a quite high level of domestic debt.

And for Mexico, well, elections are just around the corner and uncertainty about the future of NAFTA can’t be helping…

* Brazil also benefits from having much higher reserves than either Turkey or Argentina.  Its reserves are sufficient to cover the foreign currency debt of its government as well as its large state banks and firms in full.  This has given the central bank the capacity to sell local currency swaps to help domestic firms (and no doubt foreign investors holding domestic currency denominated bonds) hedge in times of stress.  But Brazil’s reserve position is a topic best left for another time.

Why do people believe the “ten-percent myth” about human brain usage?

You may have heard that humans only use ten percent of their brain, and that if you could unlock the rest of your brainpower, you could do so much more. You could become a super genius, or acquire psychic powers like mind reading and telekinesis.

This “ten-percent myth” has inspired many references in the cultural imagination. In the 2014 movie Lucy, for example, a woman develops godlike powers thanks to drugs that unleash the previously inaccessible 90 percent of her brain.

Many people believe the myth, too: about 65 percent of Americans, according to a 2013 survey conducted by the Michael J. Fox Foundation for Parkinson’s Research. In another study that asked students what percentage of the brain people used, about one third of the psychology majors answered “10 percent.”

Contrary to the ten-percent myth, however, scientists have shown that humans use their entire brain throughout each day.

There are several threads of evidence debunking the ten-percent myth.

Neuropsychology

Neuropsychology studies how the anatomy of the brain affects someone’s behavior, emotion, and cognition.

Over the years, brain scientists have shown that different parts of the brain are responsible for specific functions, whether recognizing colors or solving problems. Contrary to the ten-percent myth, brain imaging techniques like positron emission tomography and functional magnetic resonance imaging have demonstrated that every part of the brain is integral to our daily functioning.

Research has yet to find a brain area that is completely inactive. Even studies that measure activity at the level of single neurons have not revealed any inactive areas of the brain.

Many brain imaging studies that measure brain activity when a person is doing a specific task show how different parts of the brain work together.

For example, while you are reading this text on your smartphone, some parts of your brain, including those responsible for vision, reading comprehension, and holding your phone, will be more active.

Some brain images, however, unintentionally lend support to the ten-percent myth because they often show small bright splotches on an otherwise gray brain. This may imply that only the bright spots have brain activity, but that isn’t the case.

Rather, the colored splotches represent brain areas that are more active when someone’s doing a task compared to when they’re not, with the gray spots still being active but to a lesser degree.
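
A toy example of how such images are produced makes the point. The coloured map is a difference between task and rest, so a voxel can be strongly active in both conditions and still appear grey; the numbers and threshold below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "brain": every voxel has substantial baseline activity.
baseline = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
task = baseline.copy()
task[20:28, 30:38] += 15.0     # one small region works a bit harder on the task

contrast = task - baseline     # what the coloured splotches actually show
highlighted = contrast > 10.0  # thresholding leaves a few bright spots

print(f"voxels highlighted: {highlighted.mean():.1%}")   # about 1.6%
print(f"mean baseline activity: {baseline.mean():.0f}")  # far from zero
```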

A more direct counter to the ten-percent myth lies in individuals who have suffered brain damage – through a stroke, head trauma, or carbon monoxide poisoning, for example – and what they can no longer do, or can no longer do as well, as a result of that damage. If the ten-percent myth were true, then damage to many parts of the brain shouldn’t affect daily functioning.

Studies have shown that damaging a very small part of the brain may have devastating consequences. If someone experiences damage to Broca’s area, for example, they can understand language but can’t properly form words or speak fluently.

In one highly publicized case, a woman in Florida permanently lost her “capacity for thoughts, perceptions, memories, and emotions that are the very essence of being human” when a lack of oxygen destroyed half of her cerebrum – which makes up about 85 percent of the brain.

Evolutionary Arguments

Another line of evidence against the ten-percent myth comes from evolution. The adult brain only constitutes two percent of body mass, yet it consumes over 20 percent of the body’s energy. In comparison, the adult brains of many vertebrate species – including some fish, reptiles, birds, and mammals – consume two to eight percent of their body’s energy.

The brain has been shaped by millions of years of natural selection, which passes down favorable traits that increase the likelihood of survival. It is unlikely that the body would dedicate so much of its energy to keeping an entire brain running if only 10 percent of it were used.

The Origin of the Myth

Even with ample evidence to the contrary, why do many people still believe that humans only use ten percent of their brains? It’s unclear how the myth spread in the first place, but it has been popularized by self-help books, and it may even have some grounding in older, flawed neuroscience studies.

The main allure of the ten-percent myth is the idea that you could do so much more if only you could unlock the rest of your brain. This idea is in line with the message espoused by self-help books, which show you ways you can improve yourself.

For example, Lowell Thomas’s preface to Dale Carnegie’s popular book, How to Win Friends and Influence People, says that the average person “develops only 10 percent of his latent mental ability.” This statement, which has been traced back to psychologist William James, refers to a person’s potential to achieve more rather than to how much brain matter they use. Others have even claimed that Einstein explained his brilliance by reference to the ten-percent myth, though such claims remain unfounded.

Another possible source of the myth lies in the “silent” brain areas of older neuroscience research. In the 1930s, for example, neurosurgeon Wilder Penfield hooked electrodes to the exposed brains of his epilepsy patients while operating on them. He noticed that stimulating some brain areas caused his patients to experience various sensations, while stimulating others seemed to produce nothing at all.

As technology evolved, researchers later found that these “silent” brain areas, which included the prefrontal lobes, did have functions after all.

Putting It All Together

Regardless of how or where the myth originated, it continues to pervade the cultural imagination despite an abundance of evidence showing that humans use their entire brain. Admittedly, though, the thought that you could become a genius or telekinetic superhuman by unlocking the rest of your brain is a tantalizing one.

Peru ends conservation of ‘roadless wilderness’ in its Amazon rainforests

Biodiversity reaches its zenith in south-east Peru. This vast wilderness of two million square kilometres of rainforests and savannahs is formed of the headwaters of three major river basins: the Juruá, Purús, and Madeira. Nowhere on Earth can you find more species of animals and plants than in this corner of the Amazon that rubs up against the feet of the towering Andean mountains. These forests are also home to a culturally diverse human population, many of whom still live in voluntary isolation from the rest of humanity.

In 2012 I spent a hectic few days in the exhausting Madre de Dios region, literally Spanish for “Mother of God”. I was there at the invitation of the Peruvian tourist board, which wanted to raise awareness of the region’s potential. In the lush lowland rainforests our team of ornithologists recorded more than 240 bird species in a few hours. These included the Rufous-fronted Antthrush, a near-mythical sighting among birders and one of a number of vertebrate species discovered by scientists there in the second half of the 20th century. It, and many others like it, are found nowhere else.

The end of the Mother of God

This part of Peru has long been cut off from the rest of the world in splendid roadless isolation. Globalisation has been knocking at the door for decades, however, and it may now have a way in thanks to a development plan to facilitate transportation across the continent: the Initiative for the Integration of the Regional Infrastructure of South America (IIRSA). If seen through to fruition, it will effectively end road-free wilderness in the Amazon.

The Peruvian Congress recently approved a bill declaring it in the national interest to construct new roads in the Madre de Dios region. These spurs of the IIRSA’s continent-spanning Interoceanic Highway will include a road connecting the remote towns of Puerto Esperanza and Iñapari, which will cut through a mosaic of different protected areas.

Roads and rainforests are a bad combination. As colleagues and I found in our research, the direct impacts include roadkill or the immediate loss and isolation of habitat. For many rainforest animals, like the aforementioned Rufous-fronted Antthrush, roads are barriers for dispersal. Antthrushes are birds of the dark humid rainforest understorey, which shun the light and have limited powers of flight. They cannot move through landscapes subdivided by humans.

But we also found these direct problems are followed by even bigger indirect impacts. Roads allow access to the forest. An influx of money brings an influx of people involved in extractive activities. More logging and gold mining. As the big valuable trees are thinned out for lumber, the sun reaches the forest floor and the humidity is lost. In the next dry season fires will stalk through the forest. Even in areas spared from conversion to the cattle pastures which have already swamped the eastern Amazon, those forests left standing near roads are degraded shadows of their former selves. With their communities of plants and animals altered, antthrushes and other specialist rainforest species that depend on them will suffer.

These new roads, threatening the integrity of one of the most biologically and culturally diverse places on Earth, are justified as “strengthening the national identity” of the remote Purús region. They are also billed as an opportunity to reduce the cost of goods and services and hence improve the welfare of those “trapped” in remote towns accessible only by river. It seems highly unlikely that the migration into the region of people and their pathogens, drugs, and material goods will help the identity of some of the world’s last remaining uncontacted indigenous tribes. By fragmenting their homelands, the roads will inevitably spark the sort of conflicts with loggers, hunters and narcotics traffickers that already plague the Brazilian Amazon.

Concerns about welfare and national identity can scarcely mask the reality. These regional integration plans are shaped by neoliberal imperatives to boost the global competitiveness of certain export sectors over all other infrastructure, and will see massive profits for large Brazilian multinationals and Chinese investors. Under IIRSA’s model of geographically uneven development any local short-term social and economic benefits will be undermined by disruption to crucial ecosystem services such as water balance or climate regulation.

This threatens the future viability of other sectors the government also wants to expand such as sustainable forestry and ecotourism. Not to mention regional biological and cultural diversity. The future of the western Amazon hangs on a knife edge.

When the Milky Way Collided With a Dwarf Galaxy

As the Milky Way was growing, taking shape, and minding its own business around 10 billion years ago, it suffered a massive head-on collision with another, smaller galaxy. That cosmic cataclysm changed the Milky Way’s structure forever, shaping the thick spirals that spin out from the supermassive black hole at the galaxy’s core. Two new studies — one published earlier this month, another still under peer review — describe the evidence for this previously unnoticed event.

“This is a big step forward,” said Elena D’Onghia, an astrophysicist at the University of Wisconsin who is unaffiliated with the new research. “It’s interesting because we can finally see what the history of the Milky Way is.”

To uncover evidence of the collision so many eons later, astronomers have to work like galactic archaeologists, sifting through myriad sources of surviving information to piece together a story consistent with the available evidence. Both research teams relied on data from the European Space Agency’s Gaia space telescope, which has spent years gathering exceptionally rich biographies of millions of stars — not only their locations and motions, but for many, their brightnesses, temperatures, ages and composition as well. They essentially created high-resolution and multidimensional maps of the Milky Way and used these maps to find anomalous populations of old stars that appear to retain a memory of the long-ago collision. “The Gaia results really are allowing us to see things in the galaxy that we maybe suspected were there but haven’t seen,” said Kathryn Johnston, an astrophysicist at Columbia University.

Hints of a dramatic collision had been seen before, but the indications had been inconclusive. A distinct clump of unique stars would have been a giveaway that they’re interlopers from elsewhere, but no such evidence exists. The long-ago collision so thoroughly shook things up that the telltale stars have been strewn throughout the galaxy. “There’s debris everywhere,” said Vasily Belokurov, an astronomer at the University of Cambridge and a leader of one of the two teams. “You’re basically surrounded by that debris now.”

He and his team found a large number of stars that aren’t moving in step with the galaxy’s rotation. Instead, they move in radial orbits, streaming toward or away from the center of the galaxy. These stars are also rich in “metals” — the catch-all description astronomers give to any element heavier than hydrogen, helium or lithium. Metal-rich stars likely descend from many previous generations of stars. They’re the scions of stars from a long-ago galaxy that smacked into the Milky Way, their orbits still reflecting the odd trajectory of that cosmic agitator.

“If you throw a stone in a pond, those ripples last for a while. In an analogous way, if you shake the Milky Way disk, even billions of years ago, it can take a while for that response to settle down,” said Johnston.

Belokurov’s group also modeled different collision scenarios, as well as a possible quieter history without significant collisions. An impact of a small “dwarf” galaxy indeed could have deposited a cloud of stars like the ones seen today, they found. Their work was published online earlier this month in the Monthly Notices of the Royal Astronomical Society.

The other group, led by Amina Helmi, an astronomer at the University of Groningen in the Netherlands, based its study on a newer, larger data set from Gaia and included a more detailed analysis of the chemical properties of the stars. The abundance of iron, produced by supernova explosions, relative to elements like magnesium, generated by massive yet short-lived stars, yields clues about the history of the galaxy up until the present day. Helmi and her team used this data to conclude that the Milky Way’s inner region contains hints of debris from an ancient galactic impact. They named this ancient galaxy Gaia-Enceladus.

The collision could help resolve a longstanding question about the structure of the Milky Way.  The galaxy’s spiral disk of stars is actually made of two parts: a thinner, denser region encompassed by a thicker, more diffuse region. Astronomers aren’t sure how this thick disk came about. Perhaps those stars came from another galaxy, or they’re stars from the thin disk that have interacted with one another and migrated outward over time. Helmi and Belokurov’s work suggests that instead, the Gaia-Enceladus collision ejected thin-disk stars out into the thick disk. “If this collision happened to the young Milky Way, then it would damage the stellar disk, smash it up, and send stars up to high galactic heights,” Belokurov said.

The investigation continues. Both groups are uncertain about how big Gaia-Enceladus likely was and exactly when it fell into the Milky Way. And no one can say for sure how our galaxy’s disk got heated and puffed up into a thicker one. “We don’t understand how important the impact is alone, but now we have a culprit” that could have created the thick disk, Johnston said. “What would be really exciting would be to look carefully in the disk and trace back this event and see if we’re able to find a more direct effect that’s still going on, a leftover echo.”

Is it science alone that can solve the problems of hunger and poverty?

In the early days of independent India, Prime Minister Jawaharlal Nehru said, “It is science alone that can solve the problems of hunger and poverty … of a rich country inhabited by starving people.” Would any head of state today voice this view?

A 2013 poll recorded that only 36% of Americans had “a lot” of trust that the information they get from scientists is accurate and reliable. High-profile leaders, especially on the political right, have increasingly chosen to undermine conclusions of scientific consensus. The flash-points tend to be the “troubled technologies” – those that seem to threaten our delicate relationship with nature – climate change, genetically modified organisms (GMOs), genetic therapy and geo-engineering.

The polarisation in these public debates constitutes an implicit threat to the quality of decisions that we must make if we are to ensure the future well-being of our planet and our species. When political colour trumps evidence-based science, we are in trouble.

Could it be that this increasingly dangerous ambivalence towards science in politics is related to our continued misgivings over its cultural role and status? “Science is not with us an object of contemplation,” French historian Jacques Barzun complained in 1964. This is still true. Science does not figure as a cultural possession in our media and education the way music, theatre or art does. Yet history tells us that curiosity about the natural world and our desire to conquer it are as old as any other aspect of human culture.

Ancient middle-eastern “wisdom literature”, the Epicureans’ atomic notions and Plato’s geometric concepts, the developing genre of the De Rerum Natura (On the Nature of Things) throughout the Middle Ages – these tell a long story in which modern science constitutes the current chapter rather than a discontinuous departure.

The perception that science lacks such cultural embedding, however, was highlighted in a recent study of public reaction to nanotechnologies in the European Union. The project identified strong “ancient narratives” at play in discussions ostensibly about technological risk. “Be careful what you wish for”, or “nature is sacred” were the underlying drivers of objection, ineffectually addressed by a scientific weighing of hazard analysis alone. Opponents were just talking past each other, for there was no scaffold of ancient narrative for science itself. We have forgotten what science is for.

To unearth a narrative of purpose beneath science, we cannot avoid drawing on religious heritage for at least anthropological and historical reasons. To restore faith in science, we cannot bypass the understanding of the relationship of faith with science. Here we are not helped by the current oppositional framing of the “science and religion” question, where the discussion seems to be dominated by the loudest voices rather than the most pressing questions.

The language we use can also colour our conclusions. “Science” originates from the Latin scio (I know), claiming very different values from the older name “natural philosophy”, whose Greek connotations suggest not knowledge-claims but a “love of wisdom of nature”. Wisdom, like faith, is a word not commonly associated with science, but one that might do much for our restorative task if it were. The most powerfully articulated stirrings of the desire to comprehend nature are found, after all, in ancient literature on wisdom.

In a new book published this month, Faith and Wisdom in Science, I have tried to draw together the modern need for a cultural underpinning narrative for science that recognises its difficulties and uncertainties, with an exploration of ancient wisdom tradition. It examines, for example, current attempts to comprehend the science of randomness in granular media and chaos in juxtaposition with a scientist’s reading of the achingly beautiful nature poetry in the Book of Job.

It is salutary to be reminded that most Biblical nature literature and many creation stories are more concerned with cosmical loose ends, the chaos of flood and wind, than the neat and formalised account of Genesis, with its developed six-day structure and gracefully liturgical pattern. So rather than oppose theology and science, the book attempts to derive what a theology of science might bring to the cultural question of where science belongs in today’s society.

The conclusion of this exploration surprised me. The strong motif that emerges is the idea of reconciliation of a broken human relationship with nature. Science has the potential to replace ignorance and fear of a world that can harm us and that we also can harm, by a relationship of understanding and care, where the foolishness of thoughtless exploitation is replaced by the wisdom of engagement.

This is neither a “technical fix” nor a “withdrawal from the wild” – two equally unworkable alternatives criticised by French anthropologist Bruno Latour. His hunch that religious material might point the way to a practical alternative begins to look well-founded. Nor is the story of science interpreted as the healing of a broken relationship confined to the political level – it has personal consequences too, for the way human individuals live in a material world.

American author George Steiner once wrote, “Only art can go some way towards making accessible, towards waking into some measure of communicability, the sheer inhuman otherness of matter.” Perhaps science can do that, too. If it can, it would mean that science, far from irreconcilable with religion, is a profoundly religious activity itself.

What Is an Argument?

When people create and critique arguments, it’s helpful to understand what an argument is and is not. Sometimes an argument is seen as a verbal fight, but that is not what is meant in these discussions. Sometimes a person thinks they are offering an argument when they are only providing assertions.

What Is an Argument?

Perhaps the simplest explanation of what an argument is comes from Monty Python’s “Argument Clinic” sketch:

  • An argument is a connected series of statements intended to establish a definite proposition. …an argument is an intellectual process… contradiction is just the automatic gainsaying of anything the other person says.

This may have been a comedy sketch, but it highlights a common misunderstanding: to offer an argument, you cannot simply make a claim or gainsay what others claim.

An argument is a deliberate attempt to move beyond just making an assertion. When offering an argument, you are offering a series of related statements which represent an attempt to support that assertion — to give others good reasons to believe that what you are asserting is true rather than false.

Here are examples of assertions:

1. Shakespeare wrote the play Hamlet.
2. The Civil War was caused by disagreements over slavery.
3. God exists.
4. Prostitution is immoral.

Sometimes you hear such statements referred to as propositions.

Technically speaking, a proposition is the informational content of any statement or assertion. To qualify as a proposition, a statement must be capable of being either true or false.

What Makes a Successful Argument?

The above represent positions people hold, but which others may disagree with. Merely making such statements does not constitute an argument, no matter how often one repeats them.

To create an argument, the person making the claims must offer further statements which, at least in theory, support the claims. If the claim is supported, the argument is successful; if the claim is not supported, the argument fails.

This is the purpose of an argument: to offer reasons and evidence for the purpose of establishing the truth value of a proposition, which can mean either establishing that the proposition is true or establishing that the proposition is false. If a series of statements does not do this, it isn’t an argument.

Three Parts of an Argument

Another aspect of understanding arguments is to examine the parts. An argument can be broken down into three major components: premises, inferences, and a conclusion.

Premises are statements of (assumed) fact which are supposed to set forth the reasons and/or evidence for believing a claim. The claim, in turn, is the conclusion: what you finish with at the end of an argument. When an argument is simple, you may just have a couple of premises and a conclusion:

1. Doctors earn a lot of money. (premise)
2. I want to earn a lot of money. (premise)
3. I should become a doctor. (conclusion)

Inferences are the reasoning parts of an argument.

Conclusions are a type of inference, but always the final inference. Usually, an argument will be complicated enough to require inferences linking the premises with the final conclusion:

1. Doctors earn a lot of money. (premise)
2. With a lot of money, a person can travel a lot. (premise)
3. Doctors can travel a lot. (inference, from 1 and 2)
4. I want to travel a lot. (premise)
5. I should become a doctor. (from 3 and 4)

Here we see two different types of claims which can occur in an argument. The first is a factual claim, which purports to offer evidence. The first two premises above are factual claims, and usually not much time is spent on them — either they are true or they are not.

The second type is an inferential claim — it expresses the idea that some matter of fact is related to the sought-after conclusion.

This is the attempt to link the factual claim to the conclusion in such a way as to support the conclusion. The third statement above is an inferential claim because it infers from the previous two statements that doctors can travel a lot.
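
The three-part structure can be made explicit by encoding the doctor argument above as data. This is only a sketch for illustration; the Statement class and its field names are hypothetical, not a standard formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    kind: str            # "premise" (factual claim) or "inference"
    supports: list[int] = field(default_factory=list)  # statements it draws on

# The five-step doctor argument, with its inferential links made explicit.
argument = [
    Statement("Doctors earn a lot of money.", "premise"),
    Statement("With a lot of money, a person can travel a lot.", "premise"),
    Statement("Doctors can travel a lot.", "inference", supports=[0, 1]),
    Statement("I want to travel a lot.", "premise"),
    Statement("I should become a doctor.", "inference", supports=[2, 3]),
]

# Structural check: every inference must cite statements made before it,
# and the final statement plays the role of the conclusion.
for i, s in enumerate(argument):
    if s.kind == "inference":
        assert all(j < i for j in s.supports), f"statement {i} cites a later line"
print("conclusion:", argument[-1].text)
```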

Without an inferential claim, there would be no clear connection between the premises and the conclusion. It is rare to have an argument where inferential claims play no role. Sometimes you will come across an argument where inferential claims are needed, but missing — you won’t be able to see the connection from factual claims to a conclusion and will have to ask for them.

Assuming such inferential claims really are there, you will be spending most of your time on them when evaluating and critiquing an argument. If the factual claims are true, it is with the inferences that an argument will stand or fall, and it is here where you will find fallacies committed.

Unfortunately, most arguments aren’t presented in such a logical and clear manner as the above examples, making them difficult to decipher sometimes. But every argument which really is an argument should be capable of being reformulated in such a manner. If you cannot do that, then it is reasonable to suspect that something is wrong.

How to Improve Your Skeptical Thinking

It’s easy to say “be more skeptical” or “exercise better critical thinking,” but just how do you go about doing that? Where are you supposed to learn critical thinking? Learning skepticism isn’t like learning history — it’s not a set of facts, dates, or ideas. Skepticism is a process; critical thinking is something you do. The only way to learn skepticism and critical thinking is by doing them… but to do them, you have to learn them. How can you break out of this endless circle?

Learn the Basics: Logic, Arguments, Fallacies

Skepticism may be a process, but it’s a process that relies on certain principles about what constitutes good and bad reasoning. There’s no substitute for the basics, and if you think you already know all the basics, that’s probably a good sign that you need to review them.

Even professionals who work on logic for a living get things wrong! You don’t need to know as much as a professional, but there are so many different fallacies that can be used in so many different ways that there are bound to be some that you aren’t familiar with, not to mention ways those fallacies can be used that you haven’t seen yet.

Don’t assume you know it all; instead, assume you have a lot to learn and make it a point to regularly review the different ways fallacies can be used, how logical arguments are constructed, and so forth. People are always finding new ways to mangle arguments; you should keep abreast of what they are saying.

Practice the Basics

It’s not enough to simply read about the basics; you need to actively use what you learn as well. It’s like reading about a language in books but never speaking it — you’ll never be nearly as good as a person who regularly practices that language. The more you use logic and the principles of skepticism, the better you’ll get at them.

Constructing logical arguments is one obvious and helpful way to achieve this, but an even better idea may be to evaluate the arguments of others because this can teach you both what to do and what not to do. Your newspaper’s editorial page is a great place to find new subject matter. It’s not just the letters to the editor but also the “professional” editorials which are often filled with terrible fallacies and basic flaws. If you can’t find several fallacies on any given day, you should look more closely.

Reflect: Think About What You’re Thinking

If you can get to the point where you spot fallacies without having to think about it, that’s great, but you can’t get into the habit of not thinking about what you’re doing. Quite the contrary, in fact: one of the hallmarks of serious critical and skeptical thinking is that the skeptic reflects consciously and deliberately on their own thinking, even their critical thinking. That’s the whole point.

Skepticism isn’t just about being skeptical of others, but also being able to train that skepticism on your ideas, opinions, inclinations, and conclusions. To do this, you need to be in the habit of reflecting on your thoughts. In some ways, this may be harder than learning about logic, but it produces rewards in many different areas.