Human responses to the color red are fascinating. In some people, it evokes a sense of unease, as if something menacing or risky is lurking. Think of the red traffic light that warns us that if we cross, disaster may strike. Or the flashing red lights in the rearview mirror when a police car signals you to pull over. And then there is the red fire engine, wailing as it rushes off to an unknown disaster somewhere.

Further, we often use the color red in language when we want to connote something negative:

  • a failing company is “in the red”
  • the enraged person “sees red”
  • banks redline loans to buyers who live in poor neighborhoods

Hillary Clinton in a red power suit (Source: Wikipedia)

Others view the color more positively. Advertising specialists and graphic designers tell us that red connotes a sense of power, of virility, of breaking down conventions:

  • the red flag of the revolutionary left,
  • the red garb of Catholic cardinals,
  • the red suits worn by female politicians for high-profile photo ops,
  • the red tie businessmen wear at important meetings.

When you boil it down, these varying uses of the color red are often meant to convey the same message: POWER.

Related:
A Brief but Powerful History of the Colors Purple and Blue

Physiological and psychological reactions to red

It has been shown that red light is more conducive to producing epileptic seizures than blue light. Conceivably, this is because red is associated with danger and therefore heightens the anxiety response to a certain degree, as measured by heart rate and EEG studies.

Psychological research has shown that subtle red color cues can inhibit cognitive performance, increase dominance in competitive interactions, and modulate people’s mating behavior. In many situations, red is associated with danger and perceptions of threat.

A paper published in PLOS in 2015 delved into the connection between red and the sensation of danger, in the context of website design. Specifically, the investigators tested, in two studies, whether the color red increased risky behavior. In the first study, they found that including a red (vs. gray) headline in a web-based survey led users to behave in a more risk-averse way (i.e., to choose less risky options).

Likewise, in the second study, users chose a less risky strategy in an online game when the target stimulus was red rather than blue. Both studies provided comparable evidence that including the color red in online environments can decrease the likelihood of users taking online risks that could result in financial losses.

These findings are strengthened by the use of two different control colors, which support the conclusion that the observed differences in online risk-taking are a consequence of the color red and not the chosen control conditions. Overall, the results of both experiments support the view of red as an avoidance trigger.

Did you know? Memories Can Be Modified and Manipulated

How did it all begin?

Here is the amazing thing: At the beginning (and this is not a biblical fable), there was no color discrimination, only black and white. It makes sense, right? Night and day are the most basic and powerful factors controlling the rhythm of life, be it for plants, animals, the savannah, or Manhattan.

The first color to be registered in our ancestors’ brains was most likely red, the color of blood (connoting danger) and fire (power, both beneficial and destructive).

How do we know? Of course, there is no way to have definitive proof for such an assertion because we weren’t there at the time, but we do have a lot of supporting evidence.

-Linguistic evidence

Brent Berlin and Paul Kay, both of UC Berkeley, devised an ingenious study to uncover the evolutionary progression of humans' color discrimination. They proposed that cultures throughout history have invented words for colors in essentially the same order.

They reached their conclusion based on a simple color identification test, in which 20 respondents identified 330 colored chips by name. If a language had six color words, they were always black, white, red, green, yellow, and blue. If it had four, they were always black, white, red, and then either green or yellow. If it had only three, they were always black, white, and red, and so on. The Yele language of Papua New Guinea has five basic color words, but they all describe shades of black, white, and red.
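This implicational hierarchy is compact enough to capture in a few lines of code. The sketch below is my own illustration, not the researchers' method; note that at the four-term stage a language may add green or yellow in either order, which a simple ordered list cannot fully express:

```python
# Berlin and Kay's proposed order of basic color terms, as a lookup.
# Each stage presupposes all of the earlier stages.
STAGES = [
    ["black", "white"],   # every language has at least these two
    ["red"],              # the first chromatic term
    ["green", "yellow"],  # added next, in either order
    ["blue"],
    ["brown"],
]

def predicted_terms(n):
    """Predict which basic color terms a language with n terms has."""
    terms = []
    for stage in STAGES:
        for color in stage:
            if len(terms) < n:
                terms.append(color)
    return terms

print(predicted_terms(3))  # ['black', 'white', 'red']
print(predicted_terms(6))  # ..., 'green', 'yellow', 'blue'
```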

-Red in ancient art

Altamira bison cave painting

Painting of a bison in the Altamira cave in Spain, dating to 40,000 years ago (photo credit Wikipedia)

Red is one of the first colors used by humans in art and body painting. At an archaeological site in South Africa, Pinnacle Point, ochre-colored iron oxide pigments were discovered dating back 170,000 years, coinciding with the time that Homo sapiens transformed from its archaic to its modern form.

So as soon as people felt the need to do more than merely exist, they expressed themselves in art painted on cave walls—in black, white, and red. There is no hint of any other color being used, although green, blue, and other pigments were abundant.

As late as 40,000 years ago, cave paintings still depict animals in black, white, and red. Imagine, 130,000 years of a world in black, white, and red.

-The wine-red sea

But wait, there is more! As late as the 8th century BC, Homer describes the Aegean Sea in the Iliad as "the wine-red sea." He does not use blue to describe the sea, or anything else for that matter.

The same is true of the 20th-century BC ancient Indian Mahabharata, the 18th-century BC Ugarit tablets, and the 8th-century BC Bible: no mention of the color blue. (The Bible does mention colors that were traditionally interpreted as blue, but modern research concludes that they actually referred to purple.)

What’s going on? Were they all color blind?

You may enjoy: The Surprising Ancient History of Drinking in Excess

What does neurobiology have to say about that?

In 1898, the psychiatrist W. H. R. Rivers went to the Torres Strait Islands, between New Guinea and Australia. There, he investigated the Islanders' perception of colors. He was astonished to hear the elders describe the sky as black, and a child describe the color of the sky as being as dark as dirty water.

He and other anthropologists concluded that early humans and isolated cultures were not color blind. They saw all the colors that we see but considered them simply hues of white, black, or red, not worth inventing special words for. And, by the way, like the Himba people of Namibia, they paint their bodies with, you guessed it, red ochre.

Himba woman

Himba women paint their bodies with red pigment (Photo source: Wikipedia)

Neurobiologists believe that it is not just a simple case of nomenclature; the islanders indeed perceive the sky as a bit darker than we do. When we get used to seeing two hues as different colors, language trains us to see them as different entities. The brain then exaggerates these differences, especially at the border areas between them.

Thus red, or blue for that matter, which we perceive as lighter and totally distinct from black, is in reality probably a bit darker and closer to black. In a sense, the “obvious” distinction between black and red or blue is a figment of our imagination.

And, consider this. Following brain trauma, in some cases, there is a temporary loss of color discrimination where everything is in black and white. The first color to come back is red, possibly because the neuronal circuits involved in perceiving black and white are the most ancient and the most developed, and next to them are the ones responsible for the perception of red.

Related: What Makes Humans Different from the Rest of the Animals

Color, language, and the brain

Guy Deutscher is an Israeli linguist and the author of Through the Language Glass: Why the World Looks Different in Other Languages. In the book, he argues that the language we speak shapes the way we perceive and name colors. It was translated into 8 languages and was selected by the New York Times, The Economist, and the Financial Times as one of the best books published in 2010.

To extend his observations, we can add that colors, through language, may have affected brain structure throughout human evolution. This is a remarkable example of how the environment, culture, and natural selection "colluded" to shape our brains.

As my granddaughters would say, “cool!”


First published on July 26, 2015. It has been reviewed and updated by the author.

This is a story about color. More specifically, it is the history of two particular colors: purple and blue. I think you will find it fascinating. But first, let’s start our journey with a story of what piqued my interest in the topic.

Purple in Peru

A few years ago, when my wife and I were traveling in Peru, we visited the village of Chinchero, high in the Andes. We stumbled on a weaving shop where visitors learn how the area's alluring textiles are made.

We stood, watching five squatting Quechua Indian women. They were artists, weaving Andean llama, alpaca, and vicuña wool into dazzling fabrics famous for their vivid hues and striking designs.

As I watched them work their magic, I wondered: where did these vibrant pigments come from? We drew closer to see the details. The women had placed on the ground a small heap of grayish insects, called cochineal (pronounced co-chee-kneel), that they had collected by hand from prickly pear cacti. One woman crushed the pile of insects with a pestle. Another poured some wood ashes on their pulverized bodies.

We gasped – the ashen powder turned red, then red-purple, and finally a radiant blue-purple.

weaver of Chinchero making dye for colors purple and blue

A weaver of Chinchero showing us how she makes dyes. (Photo by P. Salber)

I closed my eyes, recalling the wonder I felt in elementary school when the teacher demonstrated the litmus test. We had just witnessed an elaborate experiment carried out not by chemists in their laboratory but by people who are one with their environment and are living its most intimate secrets. Whether they understood the molecular basis of the change, as laboratory chemists would, was irrelevant.

How on earth did they figure this out?

As my kids would say: AWESOME! Envision the Quechua learning that these insects produce color. Crushing one between your fingers stains the skin a bright red. The bodies of the dried female insects contain 12-16% carminic acid, which yields a vivid shade of crimson.

But, how did they learn to combine different additives to the crushed dried bodies of these insects in order to create different shades of the original color? They likely experimented, as scientists do in their laboratories.

Wood ash and other alkaline substances increase the pH of the mixture to create purple. Small amounts of iron can also be used to transform the red to purple. Adding an acid, such as lemon juice, produces a bright scarlet. This brings me back to my wonder when I learned of the litmus test.
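As a rough illustration of this chemistry, here is a toy lookup that maps the additive and pH to the expected hue. The thresholds are my own guesses for illustration only; real shades depend on concentration, the mordant, and the fiber being dyed:

```python
# Toy model of the dyers' chemistry: carminic acid shifts hue with pH
# and with an iron mordant. Thresholds are illustrative guesses.
def carmine_hue(ph, iron_mordant=False):
    if iron_mordant:
        return "purple"          # iron shifts the red toward purple
    if ph < 5.0:
        return "bright scarlet"  # acid, e.g., lemon juice
    if ph < 8.0:
        return "crimson red"     # roughly neutral
    return "blue-purple"         # alkaline, e.g., wood ash

for ph in (3.0, 7.0, 10.0):
    print(f"pH {ph}: {carmine_hue(ph)}")
```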

The surprising role of the colors blue and purple in history

The ancients knew how to make the colors purple and blue

As early as 3,000 years ago, the ancient Phoenicians made three major discoveries:

        • They gave us the alphabet we are using today.
        • They discovered that by heating silicon oxide, found in unlimited quantities in the sands of Mediterranean beaches, they could make glass.
        • And, by extracting the secretions of the sea snail Bolinus brandaris (also called Murex brandaris), found along the coast of the eastern Mediterranean, they could make a highly prized purple dye. The dye did not fade with time but instead increased in brilliance with exposure to air and sunlight.

The dye was called Tyrian Purple, after the Phoenician port city of Tyre. They also extracted another dye, Royal Blue, from a closely related species.

As we’ll see later, the process of getting the blue dye was not straightforward and was very laborious. Couldn’t the ancients find an easier way to get blue?

As Baruch Sterman, a physicist in Israel, explains, our eyes can only see an object as blue when it absorbs red light. This is something few naturally occurring materials do.

Only a handful of naturally occurring blue materials existed in ancient times:

        • stones, such as lapis lazuli from what is now Afghanistan
        • plants, such as indigo, which grows in warm climates like those of India and Africa
        • woad (a plant of the cabbage family), which grows around the Mediterranean.

Ground-up lapis lazuli can be used to make paint, but not to dye textiles. Sadly, while indigo and woad dye fabric, they eventually fade.

Part of what made murex dye so valuable was that its colors remain brilliant. For example, 2,000-year-old pieces of murex-dyed wool found in caves near the Dead Sea are still vibrant today [REF]. But unlike in Chinchero, where the Andean women could show us their methods, there are no Phoenicians around to explain how they did it.

Enter the archeologists 

An article in the journal Archaeology describes new evidence of a robust dye industry that endured on the Mediterranean coast for millennia. A dig at Tel Shikmona, south of the city of Haifa in Israel, yielded dozens of pottery vessels and shards covered with purple and blue stains. It also unearthed industrial pools and mounds of murex shells.

Some aspects of the process of dye-making are currently unknown. However, we do know that it involved breaking open sea snail shells, removing the hypobranchial gland, and harvesting the clear fluid inside. In a process that took several days, this liquid was then heated and dissolved in an alkaline solution believed to have been made from urine or certain plants. This eventually produced a yellow fluid, into which yarn was dipped. Upon being exposed to light or oxygen, the yarn turned a rich shade of purple.

Extraction was tedious and inefficient. Thousands of Murex shells were required to dye just one Roman toga. The Phoenicians demanded a very high price for these precious goods. Their fabulous profits led to resentment. They were considered gougers and thieves. However, we now know that the traders were simply reacting to the reality of supply and demand. They had, after all, cornered the market.

The color purple was reserved for royalty

Because the purple stuff was so expensive, only kings and emperors could afford it. They allowed senators to have togas with a stripe of purple, but that was it. Commoners could only wear white, or earth tones like brown or green.

In fact, sumptuary laws were passed that regulated who could wear what. These laws were ostensibly designed to avoid conspicuous consumption. In reality, they fixed the demarcation between the aristocracy and the rest of us (assuming, dear reader, that you are not an aristocrat).

As a consequence (not completely unintended), these laws limited the demand for these sumptuous garments, keeping the price affordable for the nobles and the finery away from the hoi polloi.

Mosaic of Justinian in purple robe

This beautiful mosaic from the Basilica of San Vitale shows Justinian in all his purple glory. (Photo by Petar Milosevic, CC BY-SA 4.0, via Wikimedia Commons)

After the crusaders sacked Constantinople in 1204 (their stated mission was to liberate Jerusalem, not to plunder the capital of the Christian Byzantine Empire), the impoverished Byzantine emperors could no longer afford the glorious purple dye.

Later, medieval kings and fabulously rich Popes (who weren’t sworn to poverty at the start of their ecclesiastical careers) adorned themselves with Tyrian purple dresses.

The Church also controlled the message by paying its favorite artists du jour quite handsomely to tell the stories of the Bible through art. So only the artists close to the trough could afford the brilliant purple dye. And the message? Only the VIPs, such as Jesus, Mary, and some favorite kings, merited Tyrian Purple.

And so it went until the 18th century and the Age of Enlightenment, when liberal and democratic ideals swept away the symbols of Church and State hierarchy. About this time, chemistry began producing brilliant pigments affordable to the new middle class.

More articles from the author:  
The Fascinating History of the Color Red
Science Shines a Light on the Evolution of Music and Language

The evolution of color recognition throughout history

While you may not remember all of the details of the Iliad and the Odyssey, you may recall Homer’s enigmatic description of the “wine-red sea.” Wine-red? Has anybody ever seen the sea in anything even remotely resembling this color?

Could the famous blue of the Aegean Sea, where the Homeric events took place, ever be anything other than brilliant blue? Literary scholars struggled mightily with this strange depiction. Some of their explanations were so convoluted as to be laughable, and none were persuasive.

To compound the mystery, the colors red, black, and white are mentioned many times in the ancient manuscripts. In the later ones, such as the Bible and the Koran, green and yellow are mentioned as well. In fact, biblical red is described in many of its hues ("argaman": dark red, just like Homer's sea; "shani": pink; "siqrah": deep red). And so is green: olive green, grass green.

But not a hint of blue. 

William Gladstone, the famous British prime minister of the late 19th century, was also a classical scholar. He published a 1,700-page study of Homer's epic poetry. In a 30-page chapter, he catalogs Homer's strange choice of colors: sheep wool and ox skin as purple, honey as green, horses and lions as red. The sky is studded with stars, wide, and tinged with iron or copper hues. There is not one mention of blue.



What’s neurobiology got to do with it? 

Scientists believe that the historical mislabeling of colors (by today’s standards) is not just a simple case of nomenclature. When we get used to seeing two hues as different colors, language trains us to see them as different entities. And the brain then exaggerates these differences, especially at the border areas between them.

Everyone is a bit different in how they perceive and name colors. I see red in many hues: dark red or light red. My wife sees peach, orange, and strawberry as distinct colors.

Blue, which we perceive as lighter and totally distinct from black, is in reality probably a bit darker and closer to black. The “obvious” distinction between black and blue is a figment of our imagination. Modern neurobiological research provides ample evidence for that.

Why were black, white, and red the first colors to be perceived by our forefathers?

The evolutionary explanation is quite straightforward: Ancient humans had to distinguish between night and day. Red is important for recognizing blood and danger. Even today, the color red causes an increase in the galvanic skin response, a sign of tension and alarm.

Green and yellow entered the vocabulary with the need to distinguish ripe fruit from unripe and green grasses from wilting ones. But what is the need for naming the color blue? Blue fruits are not very common, and the color of the sky is not vital for survival.

Stay with me. First, here is a totally unexpected phenomenon: language influencing brain function. But even more fascinating is the realization that the way we see the world is somewhat of an illusion. It is a product of a trick played on us by none other than our own brain.

This brings us full circle to the ancient Greeks and Plato's allegory of the cave. He posited that what we take for reality may be an illusion. Like his cave dwellers, who see only the shadows cast on the cave wall by a fire behind them, we mistake the shadows for the real things that cast them.

Reality, as we see it, is illusory.

Mind-boggling.

How pigments are used in medicine

With the democratizing effect of chemistry-for-the-masses came another revolution: the Biology Revolution. It was an exhilarating time for people curious about the inner workings of living things:

      • The microscope allowed scientists to view cells for the first time.
      • The electron microscope allowed them to see the innards of the cell.
      • Pathologists could stain tissues, depending on their surface charge, with either a blue dye (hematoxylin) or a red one (eosin). 
      • To increase the staining specificity beyond just surface charges, researchers developed antibodies to specific cell types, like lymphocytes, muscle cells, or neurons, and bound them chemically to various dyes.

Now they could visualize exactly how the heart muscle is organized, how one lymphocyte type differs from another, and how neurons are organized in the brain.

As important as this pigment revolution was, it had a major shortcoming. It only showed the cells as static objects. In biology, nothing is static. Cells move within tissues and all around the body. Inside the cells, there is a constant flow of proteins and organelles performing their duties.

For many years, researchers could only speculate about what was happening inside the cell, based on visual cues. But then a quantum leap occurred in the development of pigments, one that made it possible to track components inside the cell.

The discovery of green fluorescent proteins wins a Nobel Prize

The discovery of green fluorescent protein (GFP) in jellyfish spawned such an impressive revolution in cell biology and medicine that its discoverers, Martin Chalfie, Osamu Shimomura, and Roger Y. Tsien, were awarded the Nobel Prize in Chemistry in 2008.

Here are some short quotes from the Nobel committee: 

“To obtain such knowledge (of the dynamic behavior of cells), new experimental and conceptual tools were required. Now, at the beginning of the 21st century, we are witnessing the rapid development of such tools based on the green fluorescent protein (GFP) from the jellyfish Aequorea victoria, its siblings from other organisms, and engineered variants of members of the “GFP family” of proteins.

Indeed, no other recent discovery has had such a large impact on how experiments are carried out and interpreted in the biological sciences, as witnessed by the appearance of more than 20,000 publications involving GFP since 1992”.

To close the loop, a paper published in the May 2008 issue of Genetics announced the discovery of a new purple fluorescent protein. Now we can simultaneously track many proteins and organelles as they course through the cell. Some stain green, some blue, some red, and yes, some stain a brilliant, majestic purple: a ballet in astounding colors.

The story continues: A new blue is invented

At the time of the discovery of YInMn blue, it had been more than 200 years since a new inorganic blue pigment had been created, the last being cobalt blue in 1802. Oregon State University materials science professor Mas Subramanian created it inadvertently while searching for inorganic materials that could be used in electronic devices.

He put a sample containing yttrium, indium, and manganese (thus the name) in a very hot furnace (more than 2,300 degrees Fahrenheit) and was surprised to find it turned a "brilliant, very intense blue." Dr. Subramanian noted that this color was a "true" blue, as opposed to many blues in nature that merely appear blue because of the way they reflect light.

Also, because it is chemically derived, as opposed to being an organic plant-based dye, this intense blue should remain stable over time. It already has widespread applications in a number of industries, including commercial paints for buildings, fashion, art, and even cosmetics.

Concluding thoughts

The history of the colors purple and blue goes beyond amazing. It is a metaphor for the eternal struggle between the haves and have-nots. Originally, the pigment was so expensive that only kings, emperors, and the church hierarchy could afford it.

These powerful people passed laws ostensibly to prevent conspicuous consumption. In reality, these sumptuary laws were designed to restrict competition for the pigment, thus ensuring lower prices for themselves.

With the dawning of the Enlightenment and the empirical science of chemistry it gave birth to, the pigment purple became affordable to the masses. These dual triumphs of democratization and flourishing technology resulted in a totally unforeseen explosion of knowledge, applied to the understanding of our biology and the development of modern medicine.


First published 5/5/11.

I am going to write about the consequences of disbelieving in free will, but first I want to tell you a biblical story that has stayed etched in my memory to this day. It goes like this:

There was a traveler of the tribe of Levi who, with his concubine, came to the town of Gibeah, said to have been located just north of Jerusalem in the territory of the tribe of Benjamin.

As they sat down to dine, they were attacked by the townspeople. He offered his concubine to the mob in order to prevent being assaulted himself. The concubine was raped all night by the mob.

The next morning, the man carried his murdered concubine to their hometown. He cut her body into twelve pieces and sent them to the twelve tribes of Israel.

The people, especially those of the tribe of Ephraim, upon hearing about the dastardly deed, were outraged and proceeded to raze several Benjaminite towns, killing every man, woman, and child in them.

Was this really God’s will?

I was shaken by this story. The image of the man carrying his woman’s body, all alone, silent, grieving, probably crying quietly, tugged at this little boy’s heartstrings. “Why did the townspeople do it?” I asked my teacher. “They were bad people but they believed they were carrying out God’s will,” was the answer.

And why did the people of Ephraim kill every man, woman, and child? Because they believed they were meting out God’s punishment.

Of course, a young child cannot quite put his finger on the philosophical inconsistencies of the answer. But six decades later, I am still asking the same questions about religious zealots who rape, kill, and maim their own because they believe it is “God’s will.”

Don't they have a will of their own? Why is it that religious zealots seem to me more prone to violent behavior than other groups? And why is their social worldview so often tinged with cruelty toward their fellow human beings? Doesn't "the good book" preach love and tolerance?

When God sanctions killing

Brad Bushman, a social psychologist at the University of Michigan in Ann Arbor, is the lead author of a study, "When God sanctions killing: effect of scriptural violence on aggression," published in the March 2007 issue of Psychological Science. It's a bit old but still a valuable read on this topic.

Bushman directed about 500 students to read the tale about the tribe of Ephraim in order to study the role of "higher authority" in the propagation of religious violence. For half of the students, he added another passage:

When the man returned home, his tribe prayed to God and asked what they should do. God commanded the tribe to “take arms against their brothers and chasten them before the Lord.”

Then, the students took part in an exercise designed to measure aggression. About half of the study participants were from Brigham Young University, and almost all of them were religious Mormons. The other half were from the Free University in Amsterdam. Of the Dutch group, only 50% believed in God and 27% in the Bible (astonishingly high percentages for Europeans).

But for both groups, regardless of whether they lived in the U.S. or the Netherlands, or whether they believed in God or not, the trends were the same. Those who were told that God had sanctioned the violence between the Israelites were more likely to act aggressively in the subsequent exercise.

What does it mean?

First, what it doesn't mean. One cannot conclude that religious people are more aggressive than non-religious people. But it does suggest that people are more prone to aggression when they feel it is sanctioned by some higher authority, be it God or his clergy.

Later studies suggest that there is a deeper aspect to the story. In the February 2009 issue of the Personality and Social Psychology Bulletin, researchers from the University of Kentucky and Florida State University described a fascinating series of experiments in a paper titled "Prosocial Benefits of Feeling Free: Disbelief in Free Will Increases Aggression and Reduces Helpfulness."

As the authors stated, they started from the premise that "laypersons' belief in free will may foster a sense of thoughtful reflection and willingness to exert energy, thereby promoting helpfulness and reducing aggression."

An obvious consequence of this assumption is that "disbelief in free will may make behavior more reliant on selfish, automatic impulses and therefore less socially desirable."


Three studies tested the hypothesis that disbelief in free will would be linked with decreased helping and increased aggression.

  1. In the first experiment, they found that induced disbelief in free will reduced the willingness to help others.
  2. The second experiment showed that chronic disbelief in free will was associated with reduced helping behavior.
  3. And lastly, the third experiment found that induced disbelief in free will caused participants to act more aggressively than controls.

The authors concluded that "although the findings do not speak to the existence of free will, the current results suggest that disbelief in free will reduces helping and increases aggression."

More on the question of free will

The May 2011 issue of Psychological Science adds another contribution to the question of free will in an article by scientists from the University of Padua in Italy. It is titled, Inducing Disbelief in Free Will Alters Brain Correlates of Preconscious Motor Preparation: The Brain Minds Whether We Believe in Free Will or Not.

First, let's understand what is meant by "preconscious motor preparation." About 30 years ago, the neurophysiologist Benjamin Libet hooked subjects up to EEG electrodes and recorded their brains' electrical activity as they were about to perform a voluntary act (say, pressing a Y or an N on a keyboard in response to a question flashed on the screen). Several hundred milliseconds before the act was initiated, there appeared a burst of electrical activity called the "readiness potential."

Mind you, this is "preconscious" because it happens before the subject is even aware of what his answer is going to be. This gives rise to the question of whether the voluntary act was the product of conscious action. Or was it predetermined by the brain before it even entered consciousness?

Is free will just an illusion?

The feeling of being in control of one's own actions is a strong subjective experience. However, discoveries in psychology and neuroscience have challenged the validity of this experience and suggested that free will is just an illusion.

This is an important question for the Church, which grapples with issues of sin and free will. It matters, too, for the justice system, which daily confronts issues of personal responsibility.

Even Libet was aware that the conclusion that free will does not exist needed much more direct evidence than his experiment provided. Later experiments, notably a pivotal paper by Schurger, Sitt, and Dehaene, demonstrated that what Libet observed was not the brain making a decision.

Rather, it was a misreading of random fluctuations in brain activity, akin to fluctuations of the weather or the stock market. The actual decision occurred essentially at the moment of action, preceding the subjects' pressing of the computer key by only about 150 msec.

Does belief or disbelief in free will manifest in brain activity?

The Psychological Science paper didn’t deal with the question of whether free will exists or not. The question it asked was quite profound nonetheless: does belief or disbelief in free will have any manifestation in brain activity? More specifically, does it affect the readiness potential?


Thirty subjects were presented with selected paragraphs from Francis Crick's book The Astonishing Hypothesis: The Scientific Search for the Soul.

Half the subjects read, among other paragraphs, one that stated that free will is an illusion. The other half did not read that paragraph. The subjects were then asked to press a computer mouse when a cursor flashed across the screen.

The result: the ones who read the passage questioning the existence of free will had a significantly reduced “readiness potential” as compared to the control group.

Whether the readiness potential is a manifestation of a preconscious, predetermined instruction by the brain on how we should act is a matter of debate. But predetermined or not, it is part of voluntary control. And the study shows that believing there is no free will, that all is predetermined by the brain, has, in turn, an effect on the brain and how it functions.

What are the consequences of determinism?

The familiar but unfortunate consequences of fundamentalist belief that all is predetermined by a higher authority have been with us since biblical times. And they are here today. So when we ask “how can ostensibly religious people commit atrocities?” there are no simple answers, but here is an attempt at some explanations.

Neurobiology tells us that disbelieving in free will reduces the brain's control of voluntary activity. In other words, it produces a much more laissez-faire, anything-goes brain.

From an evolutionary point of view, we act on two levels:

  1. the individual aggressive animal
  2. the social empathetic one.

The latter requires self-control, which requires the feeling of self-control, illusory or not.

Remove the belief in the existence of self-control and you remove the linchpin of civil society. Indoctrinate a person to believe that God or a party leader (Hitler, Stalin, and Pol Pot come to mind) predetermines all, that there is no such thing as free will, that there is no point in thinking for yourself because such a thing doesn't exist, it is all illusory, and you create monsters of historical proportions.

From a psychological point of view, we as social animals are programmed to behave altruistically and to exert self-control in our interactions with other members of our society. So when the aggressor in us asserts itself despite our religious or humanistic instincts, it creates an unbearable cognitive dissonance.

We resolve it by denying belief in free will: God has willed us to do it; who am I to defy Him? As the comedian Flip Wilson kept saying, making fun of this notion: "the devil made me do it." It turns out this is far from a laughing matter.

The bottom line

So regardless of whether free will is illusory or not (and recent neuroscience says that it is not illusory at all), it is important that we believe it exists. History has taught us that the consequences of disbelieving in free will, of accepting determinism, are just too awful.


First published on May 30, 2011. Reviewed and updated by the author 06/21/17 and again on 8/12/20.

A while back, I listened to a fascinating interview on NPR with Paul Raeburn talking about science and fatherhood. He explored the topic in depth in his book, Do Fathers Matter? What Science Is Telling Us About the Parent We've Overlooked.

Here is a smorgasbord of amazing information served in the book:

  • children of older fathers are at increased risk of autism
  • men can get morning sickness during their partner's pregnancy
  • men can get postpartum depression
  • a father's attitude toward the unborn baby can affect the baby's personality throughout its life (mechanism unknown)

And, just as in the mother, the expectant father's oxytocin and prolactin levels (who knew males even had these hormones?) rise and stay elevated throughout the newborn's infancy.

One fascinating bit of data the author shared during the interview shines some light on our social attitudes, including us scientists. In his search of the science database, PubMed, for the term “motherhood”, Raeburn found over 200,000 citations, but for fatherhood, about 20,000. That’s a ratio of 10:1.

Does that mean the father is not considered as consequential to the baby's well-being as the mother? I can already hear the cries of protest rising from aggrieved fathers who view themselves as great dads. Yet, isn't it curious that all the blurbs singing the book's praises in the full-page ad in the magazine Scientific American Mind (July/August 2014) were written by women?

More content by this author:
What Do Women Really Want When it Comes to Men?
Can You Believe in Science and Still Vote for Trump?
Lecithin Supplements: Understanding the Risks and Benefits

Fatherhood and science

Science actually tells us quite a bit about fatherhood. An article in PNAS (Proceedings of the National Academy of Sciences) by Israeli scientists examined the brains of fathers (not to worry, they used imaging techniques). What they found is quite fascinating.

Before we delve into their findings, let's deal with a methodological issue. How do you control for the presence of a mother in the triangle of baby/father/mother? Whatever you may find occurring in the father's brain would be open to the criticism that "obviously, the mother's influence is not accounted for."

Well, the ingenious solution was to measure brain activity, oxytocin, and parenting behavior in three groups: primary caregiving mothers, secondary caregiving fathers, and primary caregiving homosexual fathers raising infants without maternal involvement.

Brain structures for caregiving

The study revealed that parenting implemented a global “parental caregiving” neural network that was, by and large, consistent across parents. This “caregiving neural network” integrated the functioning of two systems.

  • The emotional processing network

This network includes subcortical and paralimbic structures associated with vigilance, salience, reward, and motivation.

  • The “mentalizing” network

The mentalizing network involves the frontopolar-medial-prefrontal and temporoparietal circuits of the brain. It is implicated in social understanding and cognitive empathy.

These two networks work in concert to imbue infant care with emotional salience, attune with the infant state, and plan adequate parenting.

What did the study show?

Primary caregiving mothers showed greater activation in emotion-processing structures. Secondary caregiving fathers, in contrast, displayed greater activation in cortical circuits associated with oxytocin and parenting.

Primary caregiving fathers (these are the homosexual fathers) exhibited high amygdala activation similar to primary caregiving mothers. They also showed high activation of the superior temporal sulcus (STS) comparable to secondary-caregiving fathers and functional connectivity between the amygdala and STS.

What functions does the STS serve? 

The STS is involved in the perception of where others are gazing (joint attention). Thus, it is important in determining where others' emotions are being directed. It is also involved in the perception of biological motion (as opposed to the motion of inanimate matter).

In individuals without autism, the superior temporal sulcus also activates when hearing human voices. Among all fathers, time spent in direct childcare was linked with the degree of amygdala-STS connectivity. This dose-response relationship lends a great deal of validity to the finding.

The take-home lesson is that fathers' brains are malleable, and the same neural pathways are activated in infant caregiving as in mothers.

Fatherhood and science: Is parenting good for you?

Most anthropoid primates (humans, chimpanzees, gorillas, baboons, gibbons) are slow to develop. Their offspring are mostly single births and the inter-birth intervals are long.

To maintain a stable population, parents must live long enough to sustain the serial production of a sufficient number of young to replace themselves while allowing for the death of offspring before they can reproduce.

A study published in PNAS looked into this issue. The results confirm what we knew all along. In species where the mother is the primary caregiver, she lives longer than the male. In species in which the males participate at least equally in offspring-rearing, they live as long as the female.

Additional reading: Why Your Teen is Out of His Mind

So, what about humans?

Human data from the Swedish population from three historical periods indicate a female survival advantage going back to 1780, the earliest records available.

The female advantage is evident throughout more than two centuries in spite of large differences in mortality rates. Similar female advantages were recorded in the earliest data from England and France in the 19th century.

A female survival advantage has also been found among adults in the Ache, a well-studied hunter-gatherer population living in the forests of eastern Paraguay. And, importantly, the female advantage has been present in most countries throughout the world in the 20th century.

These data strongly suggest that the survival advantage in human females has deep biological roots. Although human fathers have a significant role, human mothers generally bear the greater burden in caring for their offspring.

The downside of male parenting

Before you grab the baby from mom’s arms in the vain hope of increasing your lifespan, consider these studies. One study showed that fathers reporting three or more hours of daily childcare had lower testosterone at follow-up compared with fathers not involved in care. I know, I know, it makes perfect evolutionary sense.

You don’t (or rather the mother doesn’t) want a horny father to beget (this is a biblical euphemism) a new baby while the present one needs so much care. Still, low testosterone is so…uncool.

Dr. Anna Machin, an evolutionary anthropologist at Oxford University, shines a light on another aspect of the evolutionary advantage of the testosterone decline in newly minted dads. In an article in the NYT, she cites many studies showing a drop in testosterone levels just before and just after the birth of a man's first child.

What causes this drop is not clear, but the hormonal and neurobiological consequences are evident. Lower testosterone means a higher ratio of estrogen/testosterone. That, in turn, leads to an increase in the bonding hormones oxytocin and dopamine.

It is noteworthy that the brain’s reward system releases dopamine. This is the same hormone and brain structure involved in addictive behaviors. The bottom line: men like it and are likely to repeat it. 

The difference between mother and father’s brain response

There is a difference though between mother and father when it comes to the brain’s response to this hormonal change. An MRI study from Israel’s Bar Ilan University and Hebrew University showed an interesting structural difference between new moms and dads.

The researchers found increases in midbrain structures involved in care, nurturing, and risk avoidance in the moms' brains. In dads, the increase in size and thickness occurred in the outer layer of the brain, the neocortex. This is the part of the brain where conscious cognitive tasks reside, such as thought, goal orientation, planning, and problem-solving. As the authors point out, evolution took care of both aspects of child-rearing.

Don’t worry, dads, the changes are reversible

Your worst fears about this evolutionary imperative are seemingly confirmed by another study, titled "Testicular volume is inversely correlated with nurturing-related brain activity in human fathers": your testicles may shrink while you take care of your baby. But don't worry, my fellow fathers, it's not quite as bad as it sounds. The jewels regain their previous volume once the child-rearing period is over. And you are back in the race.

Happy Father’s Day!


This post was originally published on Father's Day, 2015. It has been reviewed, updated, and republished every Father's Day since. Enjoy!

We used to have a dog, Hubert Beagle-Basset, who suffered from a severe case of separation anxiety. Whenever we came back home, even if we had only been gone for a short period of time, he used to run joyous circles around the dining room table—we used to call them victory laps (My humans came home! My humans came home!)

Our present dog Sherman, a big black lab, suffered from depression when we got him from the San Francisco SPCA shelter. He had already been there twice. We tried to let him know that he had hit the jackpot coming to live with us. But he was so insecure that he didn’t wag his tail for a whole year.

He’s fine now—he wags his tail a lot—but he is still pretty “weird.” If he isn’t out for a walk or eating, he is hiding out in our closet. He loves the small dark space and he loves being alone.

Dog psychiatry

I am pretty sure Sherman would qualify for the canine diagnosis of an introverted personality. Ask any veterinary psychiatrist (yes, they exist) and they will tell you that dogs suffer from almost every psychiatric disorder that afflicts humans—all except one: schizophrenia.

If we played the quasi-philosophical game of “what defines us as human?” my first choice would be our capacity to hope and the ability to plan for the medium and distant future. But a close second is the uniquely human malady of schizophrenia.

Now, I am not being flippant. No other species is “endowed” with this psychiatric disorder. Interestingly, as I have written before, schizophrenia is associated with creative genius, a characteristic that is also uniquely human. What’s the connection, if any?

Related content: Does Your Dog Have Personality? But of Course!

Genetics of schizophrenia

If you wanted to identify all the genes that are somehow associated with schizophrenia, you would do the obvious. That is, compare the whole genome of people with schizophrenia to that of people without the disorder. That sounds easy, but actually it’s quite an undertaking.

Despite the huge strides made in DNA sequencing, the accurate sequencing of a whole human genome is still far from trivial. Also, you would need to sequence thousands of individuals both with and without the disorder.

Why the need for massive numbers? Because the genome of every individual is, well, uniquely individual. This is because we are all continuously subject to random mutations, the vast majority of which are ‘neutral’, neither beneficial nor deleterious.

Furthermore, we all live in different environments. And as it turns out, the environment can exert its influence on our genes by inducing chemical changes, called epigenetic changes, that affect the expression of specific genes.

So, to get to the “core genome,” we have to cancel out all the “noise” in any individual genome. This can only be done by determining the sequence of thousands of genomes.

No single scientist could possibly accomplish such an undertaking. It would require the collaboration of hundreds of laboratories around the world. But, indeed this was done.

The Schizophrenia Working Group study

A study, published in Nature, is the result of a collaboration among more than 300 scientists from 35 countries. This collaboration is called the Schizophrenia Working Group of the Psychiatric Genomics Consortium. The researchers compared the whole genomes of nearly 37,000 people with schizophrenia with more than 113,000 people without the disorder. And the results?

They found 128 gene variants associated with schizophrenia, in 108 distinct locations in the human genome. The vast majority of them had never before been linked to the disorder.

Bear in mind, a study like this cannot identify specific genes that cause the disease. It does, however, provide a list of genes that will become the subject of detailed investigations into their role in causing it. But with such a long list of genes, where do you start?
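To make the case-control logic concrete, here is a minimal sketch of the kind of association test that is run for each variant. This is my illustration, not the consortium's actual pipeline (which adds genotype imputation, corrections for population structure and relatedness, and meta-analysis across cohorts), and all of the allele counts are invented:

```python
# A toy case-control association test for a single genetic variant.
# All counts are hypothetical; a real GWAS repeats a test like this
# for roughly a million variants across the genome.
from scipy.stats import chi2_contingency

# Allele counts: [risk allele, alternative allele]
cases    = [21_000, 53_000]    # invented counts for ~37,000 cases
controls = [55_000, 171_000]   # invented counts for ~113,000 controls

chi2, p, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")

# Because so many variants are tested at once, genome-wide significance
# is conventionally declared only at p < 5e-8.
if p < 5e-8:
    print("This variant reaches genome-wide significance.")
```

Repeated across the genome, tests like this are why the study needed such enormous samples: only very large case and control groups can push real but small effects past so stringent a threshold.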

An evolutionary approach to the genetics of schizophrenia 

Why is schizophrenia uniquely human? Researchers at Mount Sinai Medical School came up with a brilliant evolutionary approach to the question.

Schizophrenia is relatively prevalent in humans despite being detrimental. The condition affects over 1% of adults. So it must be associated with something that confers a selective advantage. And that “something” must be uniquely human.

Indeed, there are segments of our genome called human accelerated regions, or HARs. HARs are short stretches of DNA that, while conserved in other species, underwent rapid evolution in humans following our split with chimpanzees, presumably because they provided some benefits specific to our species.

What do HARs do?

The genes found in those HAR stretches don't code for proteins; instead, they regulate other genes in their vicinity. Could some schizophrenia-associated genes happen to be in the neighborhood of some HARs?

To find out, Dudley and colleagues used data culled from the Psychiatric Genomics Consortium that I mentioned above. They first assessed whether schizophrenia-related genes sit close to HARs along the human genome—closer than would be expected by chance.

It turns out they do, suggesting that HARs may play a role in regulating genes contributing to schizophrenia. And what makes those genes even more interesting is that they were found to be under stronger evolutionary selective pressure than other schizophrenia genes. This implies that the human variants of these genes are beneficial to us in some way despite harboring schizophrenia risk.
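How do you decide whether "closer than expected by chance" holds? A standard tool is a permutation test: compare the observed proximity of schizophrenia genes to HARs against the proximity of many randomly drawn gene sets. Here is a hedged sketch of that logic; the coordinates are invented, and a real analysis would also match the random sets for confounders such as gene length and local gene density:

```python
# Illustrative permutation test: are schizophrenia-associated genes
# closer to HARs than randomly chosen genes are? Positions are invented.
import bisect
import random

random.seed(0)
GENOME = 3_000_000_000  # rough human genome length in base pairs

hars      = sorted(random.randrange(GENOME) for _ in range(2_700))
scz_genes = [random.randrange(GENOME) for _ in range(300)]
all_genes = [random.randrange(GENOME) for _ in range(20_000)]

def mean_distance_to_nearest_har(genes):
    """Average distance from each gene to its nearest HAR."""
    total = 0
    for g in genes:
        i = bisect.bisect(hars, g)
        neighbors = hars[max(i - 1, 0):i + 1]  # flanking HARs
        total += min(abs(g - h) for h in neighbors)
    return total / len(genes)

observed = mean_distance_to_nearest_har(scz_genes)

# Null distribution: the same statistic for 1,000 random gene sets.
null = [mean_distance_to_nearest_har(random.sample(all_genes, len(scz_genes)))
        for _ in range(1_000)]

# Empirical one-sided p-value: how often chance alone does at least as well.
p = sum(n <= observed for n in null) / len(null)
print(f"observed mean distance: {observed:,.0f} bp; p = {p:.3f}")
```

With these random inputs, the test comes out null, as it should; in the real data, the schizophrenia genes sat closer to HARs than the permuted sets did.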

Beneficial HARs

To help understand what these benefits might be, Dudley's group then turned to gene expression profiles. Gene sequencing is important, but it can give us only a gene's structure, not its function. So the most we could say about the genes is that they are associated with the disease.

To find a causal connection we need to know the function of the gene when it is turned on and off, and in what tissues. That’s what gene profiling does.

Dudley’s group found that HAR-associated schizophrenia genes are found in regions of the genome that influence other genes expressed in the prefrontal cortex (PFC). Inputs into this area arrive from the rest of the brain and are integrated to carry out higher cognitive functions that we associate with being human, such as judgment, planning, decision making, and the like.

Many of those inputs are mediated by the neurotransmitter dopamine. Others are mediated by acetylcholine, norepinephrine, and glutamate. But all of them are excitatory. They deliver a positive signal.

Nothing in biology is unregulated

Now, nothing in biology is left unchecked or unregulated. Too much of a good thing can be highly disruptive to the stability of the system. Just imagine if a whole cacophony of signals assaulted the PFC. Ideas rushing in uncensored, images flooding in unfiltered, voices unrelentingly filling our consciousness—we would go crazy.

GABA in relation to schizophrenia

GABAergic neurons (Source: Wikipedia, fair use)

To prevent this dismal state of affairs, we need an inhibitory system, a yang force to counteract the yin, if you will.

Thankfully, we have such a system. There are neurons that secrete an inhibitory neurotransmitter called GABA, which tamps down the cacophony of the various signals and maintains our sanity.

So what did the gene profiling of those HAR-associated genes find? It found that they are involved in various essential human neurological functions within the PFC, including the synaptic transmission of the neurotransmitter GABA.

Not surprisingly, impaired GABA transmission is thought to be involved in schizophrenia. If GABA malfunctions, dopamine runs wild, contributing to the hallucinations, delusions, and disorganized thinking common to psychosis. In other words, the schizophrenic brain lacks restraint.

Schizophrenia: It’s all about balance

Very few things in biology are all-or-none, like a light switch. They are more like a rheostat, dimming or brightening the light. In biology, we also refer to it as a dose-response. If you have a strong stimulus, you get an appropriately strong response.
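One standard textbook way to express this rheostat-like, dose-response behavior (my addition for illustration; it is not taken from the studies discussed here) is the Hill equation:

$$E(S) = \frac{E_{\max}\, S^{n}}{K^{n} + S^{n}}$$

Here S is the stimulus strength, E(S) is the response, K is the stimulus that produces a half-maximal response, and n sets the steepness. With n = 1, the response rises gradually, like a rheostat; as n grows large, the curve sharpens toward the all-or-none switch that biology mostly avoids.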

So would it be much of a stretch to suggest that, neurobiologically speaking, creative geniuses have a hyper-stimulated dopamine system or, alternatively, an underactive GABA system?

If so, it could go a long way toward explaining their almost universal description of ideas flooding in, of visualizing sounds, of hearing conversations in their heads. And how far is that from crossing the threshold into the incapacitating pathology that we label schizophrenia?

Psychiatric disorders and imbalance

Of course, we still don't know for a fact that all this really happens in the brains of creative people. But one thing does seem clear: many psychiatric conditions, in addition to schizophrenia, may be related to an imbalance of the signals reaching the PFC.

Paranoid ideation is closely related to schizophrenia. And, OCD, despite its frequent portrayal as a behavioral quirk, is a vicious and debilitating mental illness, with some similarities to the experiences of schizophrenia.

People with OCD can have some of the same dark ideas, thoughts, and images as someone with schizophrenia, but the person with OCD is fully aware that they generate the thoughts themselves.

The bottom line: Why dogs don’t get schizophrenia

The studies we cited reveal how wonderfully complex the human brain is, and how exceptional the human species is. They also make it increasingly clear that schizophrenia and its associated psychiatric disorders are part and parcel of our becoming what we are today. They are the price we pay for our wonderfully crafted, uniquely human brain.

More by the author:
Age-Related Memory Loss and What Can You Do About It?
Want to Know Why You Procrastinate?



This post was first published on 09/20/15. It has been reviewed and updated by the author for republication on 3/22/2020  

I was having coffee with a friend a while back, and somehow the conversation wandered into how smart our dogs are. Like proud parents, we regaled each other with stories about our respective dogs' personalities. I asked, rhetorically, whether dogs really have a personality. Like most dog owners, we confidently answered, "Of course!"

But the easy answer bothered me. The fact that millions of dog owners believe it to be a fact doesn’t make it so. Strictly speaking, that simply makes it millions of anecdotes. And anecdotes, however numerous, do not constitute proof. At best it amounts to a bunch of observations by loving owners, and thus, it is inherently highly biased.

What is personality?

Psychologists who study this subject describe five components that together make up a personality.

1.  Openness to experience

Openness to experience includes several dimensions, such as:

      • active imagination (fantasy)
      • aesthetic sensitivity
      • attentiveness to inner feelings
      • preference for variety
      • intellectual curiosity

2.  Conscientiousness

This trait implies being thorough and careful. It is manifested by a desire to complete a task and do it well.

3.  Extraversion and introversion

Extraversion tends to be manifested in outgoing, talkative, energetic behavior, whereas introversion is manifested in more reserved and solitary behavior.

4.  Agreeableness

This personality trait manifests itself in individual behavioral characteristics that are perceived as kind, sympathetic, cooperative, warm and considerate.

5.  Neuroticism

This trait is characterized by anxiety, fear, moodiness, worry, envy, frustration, jealousy, and loneliness.

Each of these traits spans a wide spectrum. An individual can fall at one extreme of a trait, such as openness to experience, or in the middle of the spectrum, as with agreeableness. They could also be on the lower end of extraversion, such as a socially awkward, introverted loner. I know many scientists who fit this pattern. But what about dogs?

Related content: Why Dogs Aren't Schizophrenic?

Dog personality

Research on dog personality has identified several dimensions. Here are some:

  1. Reactivity (approach or avoidance of new objects, increased activity in novel situations)
  2. Fearfulness (shaking, avoiding novel situations)
  3. Activity
  4. Sociability (initiating friendly interactions with people and other dogs)
  5. Responsiveness to training (working with people, learning quickly)
  6. Submissiveness
  7. Aggression

Note a basic fact: dogs do have personalities, and they can be described in very specific terms, just like those of humans. Further, these traits even have equivalents in human personalities.

For instance, reactivity and fearfulness are features of human openness to experience. Submissiveness and aggression are components of human agreeableness. Sociability is a manifestation of extraversion-introversion in humans.

Dog behavior has been shaped by millennia of contact with humans

Is this surprising? I think not. Dog behavior has been shaped by millennia of contact with humans. As a result of this physical and social evolution, dogs, more than any other species, have acquired the ability to understand and communicate with humans. They are uniquely attuned to our behaviors.

And just as in humans, their intelligence is defined by their ability to perceive information, retain it as knowledge, and apply it to solve problems. Dogs have been shown to learn by inference and from prior experience. Just like we do.

Related content: What Makes Humans Exceptional? Our Neurobiology

Anecdotally, my late beagle-basset, Hubert, was a master of deception. He would steal food and bury the wrappings so as not to leave evidence of the crime. This implies that dogs have what psychologists call a theory of mind, namely the ability to figure out what we think and feel. Guess what? This is called empathy. Can you think of a more human trait?

This is all “psychology”. Is there any “hard” evidence?

Until recently, the answer would have been negative. But a fascinating paper in PNAS goes a long way toward explaining the intriguing process of human and dog co-evolution. It also takes the field into harder science.

But before you go on, pause and take a long look at the dog in the photo above. The thing that strikes you the most is the wide-open eyes, evoking the feeling of an intelligent gaze and a sense of sadness.

I have never looked into the eyes of a wolf, but I doubt that it would arouse the same feelings in me. So how did dogs accomplish this feat?

Juliane Kaminski and her colleagues looked the dogs in the eyes, so to speak. What they found is that the muscle that raises the inner eyebrow of the dog (levator anguli oculi medialis) is highly developed in dogs but not at all in wolves.

Furthermore, behavioral data, collected from dogs and wolves, show that dogs produce the eyebrow movement significantly more often and with higher intensity than wolves do. Further, the highest-intensity movements are produced exclusively by dogs. 

So what does that have to do with the bonding between dogs and humans? This very muscle is also highly developed in humans. Not only that, but it is already fully developed in newborns and infants. What for?

This movement accompanies an expression that humans produce when sad. It is useful not only in adult facial expression but especially in infants, who have limited means of expressing their needs and evoking a nurturing response. It is understandable, then, why our dogs evoke a similar response in us.

The co-evolution of humans and dogs has a downside

Unlike dogs, untrained wolves don’t understand our gestures (like pointing), they don’t stare lovingly into our eyes, and, of course, they don’t play fetch. But a dog’s highly sociable nature carries a cost. According to a 2015 study, it makes them less able to solve problems on their own.

An experimenter (the dog’s owner in the case of the pets) called each animal and allowed it to sniff a bit of sausage. He or she next placed the meat inside a clear plastic container and snapped on a lid from which a short piece of rope extended. The dog or wolf had only to pull on the rope while holding down the box to get the treat.

The results show that even when faced with a puzzle that is easily solved, wolves and dogs use different strategies. The wolves were bent on figuring out the solution by themselves. But the dogs generally made no effort to work on the puzzle until someone encouraged them. Even then they were oddly unsuccessful.

Well, maybe dogs are just not mechanically inclined. But what about their understanding of human language? Wolves may be able to figure out how to open a plastic container. However, you can talk to them until you are blue in the face and they still won’t understand a word you say. They may even get irritated with you for bugging them. And then you had better watch out!

How dogs understand language

When we listen to speech, we can separately analyze lexicon (words and their meaning) and intonation (the emotional content of a sound). Our brain then integrates this information into a unified content.

A New York Times article reported on an fMRI study done in Hungary to determine how dogs perceive human speech. Dogs were put in an MRI machine (How did they get the dogs to sit still? This, in itself, is a feat).

A trainer then said words of praise, such as “good dog”, and neutral words, such as “however”. Positive and neutral tones were used for both the praise and the neutral words.

Unsurprisingly for us dog owners, parts of the dogs’ left hemisphere reacted to the words’ meaning and parts of the right hemisphere to intonation. This is just like our brains.

Further, a bonus finding was that only words of praise said in a positive tone made the reward system of the dog’s brain light up. My guess is that it’s the positive tone that makes the difference. My dog would always wag his tail if I told him, in a loving voice, that he is a dummy.

So, do dogs have a personality?

Of course, they do. It is just a little different from ours. But do they have a language? Not really, except for some rudimentary barks that convey simple messages, like “let’s go for a walk” or “I am hungry”. But they do understand language. And they can read our body language as well as, or better than, we do.

***



This was first published on October 7, 2016. It has been reviewed and updated by the author for republication on September 25, 2019.

A while back, Nature magazine published a very interesting article about honeybees entitled “Insights into Social Insects from the Genome of the Honeybee Apis mellifera.” The article is the product of a remarkable collaboration of 90 research institutions around the world, involving hundreds of scientists. Why should so many scientists care enough to devote a significant part of their careers to this enterprise? And why should we care about what they learned? It’s because we can learn a lot about ourselves from the genome of honeybees.

First, a bit about honeybees

The honeybee has always fascinated humans. The Ancient Egyptians considered it (along with the dung beetle) a deity, apparently because of its industry and seeming intelligence.

The biblical Israelites honored some of their daughters with the name Dvora, which means “bee”. The name is still common in present-day Israel. In English-speaking countries, it became Deborah.

Just like the Ancients, the Mormons have incorporated the bee into their belief system. For them, the bee is a symbol of industry, perseverance, and intelligence. And, in common parlance, we say that hard-working people are “busy as bees.” Over and over again, we find that human beings are fascinated by and admire the industry of bees.

Bee science

When I was a high school student, I vividly remember reading about Karl von Frisch’s work decoding the language of the bee dance. Through a “waggle dance”, they communicate to each other the location, direction, and distance of food sources. They even maximize the efficiency of the pollen and nectar collection through two other types of dances:

  • a “shaking dance” to recruit more foragers
  • and a “tremble dance” to draft more bees to handle the food inside the hive.

Prof E.O. Wilson of Harvard, an eminent biologist and the “father” of the field of Sociobiology, quotes Karl von Frisch in his commentary on the Nature article:

“The life of bees is like a magic well. The more you draw from it, the more there is to draw.”

The genome of honeybees

There are many aspects of honeybee genome work that are of purely scientific interest. One of these is the similarities and differences of the honeybee genome as compared with the genomes of the fruit fly (Drosophila) and the malaria-bearing mosquito (Anopheles). Another is the bee’s origin in Africa.

But several aspects are of great interest to us non-experts. Take for instance the relationship between energy metabolism and longevity. It is well known that the hormone insulin and another protein called insulin-like growth factor (IGF-1) bind to the same receptor on the cell surface – the insulin receptor (IR).

Once either one of these molecules binds to the receptor, a whole cascade of signals is propagated, through specific pathways, to the DNA in the cell nucleus. Some genes are activated and some are silenced, but the common denominator to all of them is that they are involved in the regulation of the most critical functions of our lives, such as:

  • energy metabolism
  • fertility
  • aging

These are just a few of the many important biological processes regulated by insulin, IGF, and the insulin receptor.

Energy balance, reproduction, and lifespan

Why is this so important? Because this code may allow us to decipher the relationship between food intake and aging. We have known for a long time that obesity is linked to a shortened lifespan. It is also known that the higher the reproductive capacity of a species, the shorter its lifespan.

But these relationships are turned upside down when it comes to certain bees. Queen bees are gluttons. They are constantly being pampered and fed by specialized worker bees. The worker bees make sure the queen of the hive eats a lot and doesn’t exercise too much. That way she can devote all her energies to one task: laying eggs. And this she does prodigiously, usually laying about 1500 eggs per day!

Yet, despite her gluttonous lifestyle, the queen bee has a lifespan of 1-2 years. This is quite remarkable when compared to worker bees, whose lifespan is only 1-2 months, roughly a tenfold difference (a queen’s ~20 months vs. a worker’s ~2 months). If we assume that humans have a mean lifespan of about 80 years, the queen bee’s extended lifespan would be equivalent to some of us living 800 years (80 × 10). Methuselah incarnate!

Related content: What Helps Ensure Cooperation in Diverse Societies?

So, how did the queen bee manage to upend the seemingly iron-clad relationships between energy balance, reproductive capacity, and lifespan? If we could understand the genetic tricks that allow this to occur, it could have direct implications for human energy metabolism and longevity. Perhaps, if we could figure out how to manipulate human genes the way the queen bee does, we could finally have our cake and eat it too – and still enjoy a long life.

The peculiar nature of honeybee society

There is another aspect to the bee genome that is a bit more somber but absolutely fascinating. Bee society is highly organized in a caste system. The Queen is at the top (Hail to the Queen!).

The male bees, called drones, are next in line. Their only function is to fertilize the queen. After performing their duty, they are killed or die on their own. At the bottom of the caste system are the worker bees. They are sterile females that divide among themselves all the tasks required to maintain the hive.

How is this amazing feat of social control of beehives achieved? It turns out that it is controlled chemically. First of all, the Queen maintains her status as an exalted egg layer by keeping the other female bees sterile. She does this by secreting chemicals called pheromones. The worker bees that constantly lick her ingest these pheromones, which keep them sterile.

The caste system is maintained through the action of another chemical, the “royal jelly”. This jelly is secreted from glands in the heads of adult worker bees and serves as food for the rest of the brood. The amount of royal jelly fed to a developing bee determines its function in the hive. If you get a lot of royal jelly, for example, you may become a guard bee. If you get less, you may become a hive cleaner, and so on.

1984 and beyond

Move over, George Orwell: the honeybee created a “1984 society” 300 million years ago! This chemically controlled, highly organized society was formed by what E.O. Wilson aptly called a “revolution at the genomic level.” It should give us pause.

Could we also be susceptible to chemical manipulation? There is some evidence that this is the case, at least to some extent. Just think of the behaviors we are describing when we say an adolescent human male is being controlled by his “raging hormones”.

Will the nascent science of neurobiology allow scientists to find new ways of controlling human behavior through chemical and non-chemical means in the future? Who knows? These are pretty scary thoughts. But, I believe that the beauty of the bee genome work is that it gives us a thorough understanding of these complex biologic interactions. Armed with knowledge, we should be able to resist and counteract any sinister schemes to control behavior through chemicals.

More from this author:  Science and Truth: Learning from a Fatal Mutation

The bottom line

Who would have expected that such important connections exist between the “lowly” honeybee and us humans? The more we learn about genomics, the more we come to realize that, in a sense, we are all One.

***



First published October 31, 2006, this article has been updated by the author for republication.

Music and dance are uniquely human activities that have accompanied human evolution since we became full-fledged social animals. And being social means communication.

Yes, language may have been the first on the scene as a mode of communication. But dance wasn’t far behind as a means of describing the world around us.

The language of music and dance is international and ancient

Tribes in New Guinea, Africa, and the jungles of South America have something in common. They all have dances describing the animals around them, recapitulating the hunt, and mourning the prey’s slow death.

You don’t have to go so far to see those dances. We once visited the Copper Canyon in Chihuahua, Mexico where the Tarahumara indigenous people live. There we saw a young boy perform his tribe’s ancient “hunt of the deer” dance. His mother and aunts and neighbors chanted the story while his cousin punctuated the dance with a drumbeat.

In hillside caves of southwestern Germany, archaeologists in recent years have uncovered the beginnings of music and art by early modern humans migrating into Europe from Africa. New dating evidence shows that these oldest known musical instruments in the world, flutes made of bird bone and mammoth ivory, are even older than first thought.

The most ancient flute is dated to 43,000 years ago. The flutes’ design and studies of other artifacts from the site suggest that music was an integral part of human life far earlier than first thought.

The evolution of music

So, how are you going to show that music actually evolved along the lines of Darwinian natural selection? Here is an ingenious experiment described in Science by Elizabeth Norton in an article titled, “Computer Program ‘Evolves’ Music From Noise”:

“Bioinformaticist Robert MacCallum of Imperial College London was working with a program called DarwinTunes, which he and his colleagues had developed to study the musical equivalent of evolution in the natural world. The program produces 8-second sequences of randomly generated sounds, or loops, from a database of digital ‘genes.’ In a process akin to sexual reproduction, the loops swap bits of code to create offspring. “Genetic” mutations crop up as new material is inserted at random. The “daughter” loops retain some of the pitch, tone quality, and rhythm of their parents, but with their own unique material added.”

This process sounds remarkably like what happens in the evolution of plants and animals, complete with mutations that allow the music to change over time, doesn’t it?

Related reading: The Fascinating History of the Color Blue

The study in PNAS

In another study published in the online version of Proceedings of the National Academy of Sciences, MacCallum and colleagues adapted DarwinTunes to be accessed online by almost 7,000 participants who rated each sound loop, played in a random order, on a 5-point scale from “can’t stand it” to “love it”.

In a musical take on the survival of the fittest, the highest-scored loops went on to pair up with others and replicate. Each resulting generation was rated again for its appeal. After about 2,500 generations of sound loops, what started out as a cacophony of noise had evolved into pleasant strains of music.

Melody out of chaos. Musical evolution may arise from tension between the vision of the composer (in this case Mozart) and the listener’s taste | Credit: Saverio della Rosa (1745-1821) | Source: Wikimedia

Interestingly, “crowd-sourcing” the evolution of music reached a plateau of pleasant, Muzak-like “composition”. And this is where the creativity of the composer or songwriter comes in to push it to a higher level.

As composer, musician, and computer programmer David Cope of the University of California, Santa Cruz points out, the composer is influenced by the crowd but in unpredictable ways. Mozart, for instance, took audience response personally, but usually continued or even exaggerated musical traits that listeners didn’t like.
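
For readers who want to see the moving parts, here is a minimal sketch, in Python, of the kind of selection loop a DarwinTunes-style experiment runs. It is not the researchers’ actual code: the genome representation and the rating function below are invented stand-ins (in the real experiment, ratings came from thousands of human listeners), but the rate-select-pair-mutate cycle is the same idea.

```python
import random

# Toy DarwinTunes-style loop: evolve "sound loops" by selection on ratings.
# A genome is a list of numbers standing in for pitch/timbre/rhythm genes.

GENOME_LEN = 16
POP_SIZE = 100
MUTATION_RATE = 0.05

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def crossover(mom, dad):
    # Offspring swap stretches of "genetic" material, akin to sexual reproduction.
    cut = random.randrange(1, GENOME_LEN)
    child = mom[:cut] + dad[cut:]
    # "Genetic" mutations crop up as new material is inserted at random.
    return [random.random() if random.random() < MUTATION_RATE else gene
            for gene in child]

def rate(genome):
    # Stand-in for listeners scoring a loop on a 5-point scale; here
    # "pleasantness" is arbitrarily defined as smoothness between notes.
    return 5.0 - sum(abs(a - b) for a, b in zip(genome, genome[1:]))

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(2500):
    # The highest-rated loops survive, pair up, and replicate.
    population.sort(key=rate, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(f"Best rating after evolution: {rate(population[0]):.2f}")
```

Run it, and an initially random population drifts toward ever higher ratings, which is all “evolving music from noise” means at the algorithmic level.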

Similarities with the evolution of language

Here is an experiment on the evolution of language that is remarkably similar to the evolution of music experiment. Compare and contrast.

In a report by Dennis Normile in Science titled “Experiments Probe Language’s Origins and Development”, Professor Kirby and his colleagues at the University of Edinburgh “hypothesized that the transmission of a language from generation to generation played a critical role.”

To test this idea, they recruited volunteers to learn a fictitious “alien” language (recall the random sounds that were the starting point in the music experiment). Working at computer terminals, they were shown a series of words and the images they referred to. The words were actually randomly generated strings of syllables.

Each of the images had a unique combination of color, shape, and patterning. The participants were then shown images and asked to type in the appropriate words. They were also asked to produce words for images with color, shape, and patterning combinations they hadn’t specifically learned.

The words as given by one participant were used to train the next in line, a process called “iterated learning” that resembles the cultural transmission of a language across generations.

Researchers tested different scenarios

Researchers tested different scenarios of that basic approach. In one, instead of individuals in each generation, there were pairs of participants who used the alien language to “communicate”, picking images from an array. (The pairs were separated and interacted via computer terminals so they could not point or gesture.)

The words they recalled after the communication exercise were used to train the next pair in the chain. Other pairs simply did the communication task – again via computer terminals – over and over without the “language” being passed to a new generation.

When pairs of humans learned the words, used them to communicate, and then passed them on through several generations, a compositional structure emerged. Parts of the words—the prefix, for example—consistently corresponded to color, and other parts became associated with shape or pattern.

Language became progressively easier to learn

Succeeding pairs found the language progressively easier to learn and use accurately. Pairs at the ends of the chains could even recombine the parts of the words to accurately label images they had not specifically learned.

When the language passed through a chain of individuals, thus skipping the communication step, it became ambiguous. One “word” could have multiple meanings. The pairs that worked just on the communication task eventually agreed on linking words with images. But the language remained an idiosyncratic pairing of syllables and meaning with no standardization or compositional structure.

To get structural properties and improve learnability, “what is really crucial [is] a combination of naive learners and communication,” Kirby says.
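
To make “compositional structure” concrete, here is a toy sketch in Python. The colors, shapes, and syllables are invented for illustration and are not the researchers’ materials; the point is simply that a compositional language, unlike a holistic one, lets a learner label images they have never seen.

```python
import itertools
import random

colors = ["red", "blue", "black"]
shapes = ["circle", "square", "spiral"]
syllables = ["ki", "la", "mo", "ne", "pu", "ra", "ti", "wa"]

# A holistic language: each image gets an unrelated random word, like the
# randomly generated syllable strings at the start of a transmission chain.
holistic = {pair: "".join(random.choices(syllables, k=3))
            for pair in itertools.product(colors, shapes)}

# A compositional language: a prefix consistently encodes color and a
# suffix consistently encodes shape, as in the late-generation chains.
prefix = {"red": "ki", "blue": "la", "black": "mo"}
suffix = {"circle": "ne", "square": "pu", "spiral": "ra"}
compositional = {(c, s): prefix[c] + suffix[s]
                 for c, s in itertools.product(colors, shapes)}

# A learner who never saw ("black", "spiral") can still produce its label
# under the compositional system, but has no way to guess the holistic word.
unseen = ("black", "spiral")
print("compositional guess:", prefix[unseen[0]] + suffix[unseen[1]])
print("holistic word (unguessable from parts):", holistic[unseen])
```

The compositional label for the unseen image is fully predictable from its parts; the holistic one is not. That is exactly the learnability advantage the chains converged on.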

You might also enjoy: How Music Can Powerfully Heal a Physician’s Life

Conclusion

These experiments demonstrate how language and music evolved along the same lines. Through iterations of the selective process, a jumble of meaningless sounds or words gradually assumed a more coherent form that is recognizable by a whole population as a means of communication, be it language or music.

It took the creative genius of a Bach and a Mozart to catapult music to a higher level, and a Chaucer and a Shakespeare to do the same for language.

More by this author: What Can We Learn About Fatherhood From Science?



This post was first published 11/20/17. It has been updated and revised for republication.

Lest we forget, our species is called Homo sapiens—the “thinking man”. We are supposed to be better than apes because, so the theory goes, we can “get into somebody’s head.” In other words, we know what other people know. We can attribute mental states such as intentions, goals, and knowledge to others.

So, you would think that a person of average intelligence would have no problem watching Kellyanne Conway or Sarah Huckabee Sanders (this list could go on and on) lying through their teeth—and figuring out that they themselves don’t really believe the increasingly bizarre “alternative facts” that they are spinning.

That being said, about 30% of Americans apparently do believe every word they utter. They are products of our post-factual world.

How has this happened when understanding false beliefs is supposed to be one of the things that define us as H. sapiens?

Getting into someone’s head

Here is a classic experiment that demonstrates how our minds work in this regard:

It starts by showing children a video of a doll named Sally hiding an item. Then, they see Sally leave the room, and another doll comes into the room. The second doll takes the item and hides it in a different place.

When you ask the children where Sally will look for the item upon her return, very young children, younger than about age four, will pick the new hiding place, where they themselves know the item to be.

However, older children, after about age four, understand that Sally doesn’t know what they know: that the item was moved from its original hiding place to a new one.

They will answer the question by saying that Sally will look for the item where she originally left it. This experiment demonstrates that even children have the capacity to “get into somebody’s mind.”

Even apes can do it

In psychology, this capacity to know what’s going on in somebody’s mind – not just know but also feel what another person feels (which is the essence of empathy) – is called “the theory of mind”. Initially, it was thought that this was a uniquely human characteristic, but now we know better.

In a series of sophisticated experiments, using eye-tracking technology, scientists repeated the doll experiment with apes (except that the doll was now King Kong instead of Sally). They showed that apes are just as smart as the older children when it came to figuring out a false belief compared to their own knowledge of the facts.

Separating fact from fiction in a post-factual world

So, assuming that the 30% of people who believe Conway and Huckabee’s post-factual “facts” are more sophisticated than children and at least as sophisticated as the apes, what explains their inability to separate fact from fiction?

And lest I come across as a rank partisan, I include among the alternative-fact believers the leftists and liberals who believe that vaccination causes autism, despite the fact that the original publication claiming evidence for it was found to be fraudulent and that numerous well-designed scientific studies have debunked the claim, including an April 2019 study of more than 650,000 children.

I also include in this group those who hold, as an almost religious belief, that GMOs will kill you and that dairy food is toxic.

Not only does science not support these unfounded beliefs, but evidence to the contrary is in plain sight. Witness the millions of people who eat and drink those ‘harmful’ things and are still alive and kicking and in excellent health. This includes those fanatic practitioners who are unwittingly consumers of GMOs, which are in almost every food we consume nowadays, regardless of claims to the contrary.

An intriguing explanation

So, that being said, how do we know what we know? Just think for a moment. Suppose you are totally isolated from other human beings and all you know is from personal observation of your immediate environment. Obviously, your fund of knowledge is going to be pretty limited.

Philip Fernbach, a cognitive scientist at the University of Colorado’s Leeds School of Business, and Steven Sloman, a professor of cognitive, linguistic, and psychological sciences at Brown University, write in their NYT article that all human knowledge is shared. For example, we know that the earth is round, but this knowledge came not from our own observation. It came from scientists and teachers who shared it.

The evolutionary imperative of sharing knowledge is obvious. Could a stone age hunter have survived very long hunting all alone? Of course not. It took a tribe, with shared knowledge and strategy, to corner fleet-footed prey and overcome ferocious predators.

People who believe non-facts are not stupid

The issue at hand is not that the people who believe non-facts are stupid—they are not. They are simply sharing, knowingly or unknowingly, non-factual knowledge.

In the words of Fernbach and Sloman, mentioned above:

“It is remarkable that large groups of people can coalesce around a common belief when few of them individually possess the requisite knowledge to support it.”

Their solution to the problem? Insist on “expertise and nuanced analysis from our leaders.”

Ha! But they just got through telling us that, individually, we are pretty much ignorami. And that we are naturally prone to believe in whatever falsehoods “leaders” feed us. It takes a great leap of faith to believe that our “leaders” would be honest enough, thoughtful enough, empathic enough, to think of the common good rather than their own.

An alternate theory

I am a bit more cynical. I believe that people act, and vote, according to their instincts, their prejudices, and beliefs embedded in their psyches over a lifetime, and not necessarily through rational analysis.

They will vote for people who, on a gut level, seem like them, whether through holding the same prejudices, using the same language, or coming from a similar social background. These factors are not quantifiable. Gut level reigns supreme.

It is basically Kahneman’s System 1, the fast thinking system that underlies instinctual behavior. It contrasts with Kahneman’s System 2, the slower thinking that drives logical and analytical behavior.

The strength of these beliefs

Once we form our view of the world, it is hard to change it. As I said above, our mind perceives it as existential. As examples, if we grew up in the Bible Belt, we “cling to our Bible and guns”, as someone once said. But if we live in the deeply blue San Francisco Bay Area, we simply cannot abide by the perceived radical conservatism of the Deep South.

We are tribal by nature. And the common belief in alternative facts is not an accident. It makes the worldview of the true believers internally consistent, hence, its strength.

Related Content by this author: Random Thoughts on ‘The Theory of Everything’

Can these minds be changed?

I am afraid not by much. Our minds are inherently lazy. Kahneman’s instinctual System 1, mentioned above, rules supreme in the brain. It takes time and persistent counter-experiences to convince us that this system is wrong for us.

In plain language, “changing our minds” requires that System 2 thinking about an experience eventually become embedded into System 1 (instinct).

It will take major outbreaks of polio, with its devastating consequences, to finally convince the mothers of Rockland County and other high vaccine-refusal areas to accept childhood vaccination. Evidently, even the current measles epidemic is not enough.

Remember, during the crack epidemic of the 1980s, it took widespread unemployment and major outbreaks of crack addiction in their own communities for people to realize that “their tribe” is not immune from the “other tribe’s” problems.

The bottom line

Post-factual beliefs are deeply ingrained as a result of our tribal instincts. They are highly resistant to change. It is not going to be easy to break down the walls that tribalism has erected in our post-factual world. 


Originally published in September 2017, this post has been revised and updated for republication.


Dedicated to my friend James who is battling memory loss following chemotherapy.

In 2012, I published a story that I called, “Forgetful? Go Jogging.” It described the findings from a study published in the July 2012 issue of the journal Neuroscience. It showed that exercising for a month, either walking or jogging four times a week, improved the results of memory and mood tests. A control group that did not exercise showed no change from the initial tests. This was not a surprise as previous observational studies had already suggested that exercise was beneficial for the brain.

The most interesting finding in the study was that the behavioral improvements were positively correlated with a rise in a blood and brain protein called Brain-Derived Neurotrophic Factor, or BDNF.

Why, you may be wondering, am I writing about the results of a 2012 paper in 2019? Let me explain.

More about BDNF

As its name implies, BDNF is a protein that is secreted by neurons. Its function is to maintain the health and normal functioning of existing neurons and to support the generation of new ones. What makes BDNF extra interesting is the location in the brain where it is secreted: the hippocampus (specifically, an area called the dentate gyrus), the brain structure where memories are created and stored.

Now, just showing a correlation between A and B does not mean the two things are causally related. In other words, it does not mean that A caused B to happen. However, if we could show a molecular mechanism that plausibly explains the correlation, it would help a lot in lending credibility to the causality between the two.

Specifically, if one could show how exercise could affect the expression of the gene that controls the synthesis of BDNF, the case for a cause and effect becomes significantly stronger. It would support the hypothesis that exercise increases brain function.

Epigenetics

It is known that non-genetic factors, such as environmental chemicals and behavior, can affect gene expression. Such changes are called epigenetic.  Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence.

The November 2018 issue of The Scientist published an article titled, How Exercise Reprograms the Brain. It reported on an experiment performed by Hiroshi Maejima of Hokkaido University in Japan. He investigated the possible epigenetic effect of exercise on the synthesis of BDNF.

Maejima’s team found that the brains of mice that ran on a treadmill had greater than normal histone acetylation in the hippocampus, the brain region considered the seat of learning and memory. This epigenetic change resulted in higher expression of the gene Bdnf that controls the synthesis of the protein BDNF.

It turns out that BDNF is released by neurons when they fire and that happens when we exercise. The end result: exercise led to higher BDNF levels bathing the neurons.

The impact of BDNF

The high level of BDNF has two important consequences:

  • it causes the formation of new neurons, a process called neurogenesis, and
  • it increases the number of dendrites per neuron.

Dendrites are branched projections of the neuron that receive signals from adjacent neurons; the points of contact where neurons communicate with each other are called synapses.

The functional consequences of these anatomical changes are increased plasticity as well as an increased capacity to form new memory circuits. The resulting long-lasting strengthening of synaptic connections is called Long-Term Potentiation (LTP).

Long-term potentiation

Another study, published in the Proceedings of the National Academy of Sciences, titled “Running enhances neurogenesis, learning, and long-term potentiation in mice” reported on an experiment in which spatial learning and long-term potentiation (LTP) were tested in groups of mice. One group was housed with a running wheel (runners), the other lived under standard conditions (controls).

The researchers found that running improved water maze performance (a standard test for memory in mice), increased the number of new neurons, and selectively enhanced LTP in the dentate gyrus, the hippocampal area involved in memory. These results indicate that physical activity can regulate hippocampal neurogenesis, synaptic plasticity, and learning.

Not just memory

By supporting the growth and maturation of new nerve cells, BDNF promotes brain health. Further, higher levels of it correlate not only with memory but with generally improved cognitive performance in mice and humans.

In 2014, Marily Oppezzo and Daniel Schwartz of Stanford University published an interesting article, “Give Your Ideas Some Legs: The Positive Effect of Walking on Creative Thinking,” in the Journal of Experimental Psychology.

They divided 176 college students and adults into two groups, one of which walked while taking a creativity test; the other was sedentary during the test. The walkers scored 81 percent higher. This seems to confirm numerous anecdotal reports of academics and business leaders who claim to get their best ideas while running or walking.

The Body-Brain Connection

When we refer to the brain-body connection, we are normally thinking about how the brain affects our body. But what about this connection flowing in the opposite direction? Can bodily function affect the brain? Indeed, the myriad experiments showing the effects of aerobic exercise on brain structure and function suggest the existence of such an influence.

An international team of scientists recently found that mice that ran frequently on wheels had higher levels of BDNF and of the ketone β-hydroxybutyrate, a byproduct of fat metabolism released from the liver.

Injecting the ketone into the brains of mice that did not run helped to inhibit histone deacetylases and increased Bdnf expression in the hippocampus. The finding shows how molecules can travel through the blood, cross the blood-brain barrier, and activate or inhibit epigenetic markers in the brain.

Cathepsin B

In 2016, van Praag and her team found that a protein called cathepsin B, which is secreted by muscle cells during physical activity, was required for exercise to spur neurogenesis in mice. In tissue cultures of adult hippocampal neural progenitor cells, cathepsin B boosted the expression of Bdnf and the levels of its protein BDNF. It also enhanced the expression of a gene called doublecortin (DCX), which encodes a protein needed for neural migration.

Proof that cathepsin B is indeed required for Bdnf expression comes from genetically engineered mice in which the cathepsin B gene is suppressed, or knocked out. These mice, called cathepsin B knockouts, showed no change in neurogenesis following exercise.

Van Praag’s team also found that nonhuman primates and humans who ran on treadmills had elevated blood serum levels of cathepsin B after exercising. Following four months of running on the treadmill three days per week for 45 minutes or more, participants drew more-accurate pictures from memory than at the beginning of the study, before they started exercising.

Other exercise-induced changes

The 100 billion neurons that populate the brain require 20% of total energy consumption and 15% of blood flow. So it is not surprising that physical activity also prompts other hormone factors into action: Insulin-like growth factor (IGF-1), vascular endothelial growth factor, and fibroblast growth factor all cross the blood-brain barrier and work with BDNF to enhance the molecular machinery of learning. In addition, the hormone IGF-1 delivers the brain’s primary fuel — glucose — to neurons to spur learning.

Not so simple, and yet quite simple

As we can see, BDNF is indeed important, but it doesn’t, and probably cannot, act by itself. A complex orchestra of proteins and other molecules has to play together to allow BDNF to exert its full effect.

And yet, the bottom line is quite simple. We ordinary humans don’t have to understand the complicated mechanistic details of BDNF function because our physiology takes care of that flawlessly. All we need to know is that some kind of aerobic exercise, be it jogging, walking, bicycling, or swimming, whatever makes you breathe hard and raises your heart rate for a minimum of 30 minutes, three times a week or more, will help keep your brain and your body in good shape, proving once again that exercise is one of the best medicines known to man. Need more incentive? Think you are too old to start? Read this: a 92-year-old woman in Australia has broken several world records for racewalking since she started her athletic career about seven years ago. And she has no plans to slow down.

Related Content: 7 Simple and Not-So-Simple Strategies to Maintain a Healthy Brain

In this political season of outrageous lies, I sometimes wonder: How can politicians say things with a straight face that they know full well aren’t true? How did it get that way? Are liars born honest and become corrupted later, or are they born to become lying politicians, or businessmen, or even scientists who manipulate data? In other words, is dishonesty hard-wired, or is it a learned behavior? What does science say?

Experimentally, it’s a tough problem. Some studies have involved putting experimental subjects in an MRI machine and instructing them to lie. But that’s akin to telling a subject to cry and then assuming that his or her scan reveals anything relevant to a real-life situation.

Many liars, some famous, some less famous, but all having gotten caught, trace the origins of their big lies back to a succession of smaller, ostensibly insignificant lies. Which raises the question: Why is it so? Why don’t big liars start with the big lies? What inhibited them in the first place, and what allowed them to progress to the point of major deception?

An ingenious experiment

Dr. Tali Sharot and her team of University College London and Duke researchers conducted a series of experiments that start to answer some of these questions. To test for dishonesty escalation and its underlying neurological mechanism, they combined brain imaging with a behavioral task in which individuals were given repeated opportunities to act dishonestly.

The 80 participants in the study were given the task of estimating the number of coins in a glass jar that contained pennies valued at somewhere between 15 and 35 British Pounds (between $18 and $43). Subjects were shown large, high-resolution images of the penny-containing jars for 3 seconds. They were told that their “partner” (actually, a confederate of the researchers) would be shown a smaller picture of the jar for 1 second. The participants were told the partner’s goal was to estimate the amount with the help of their advice. This scenario is neutral, with no incentive to lie, and it served as a baseline for the later studies.

Next, the investigators changed the scenarios so as to give the participants incentives to lie. Comparing the participants’ estimates across the different scenarios allowed the team to measure degrees of dishonesty:

  • Lying benefits the participant at the expense of the partner: participants were told they would be rewarded according to how much their partner overestimated the amount, whereas the partner would be rewarded for accuracy.
  • Lying benefits the partner at the expense of the participant.
  • Lying benefits the partner without affecting the participant.
  • Lying benefits the participant without affecting the partner.

Ingenious, yes? But the team didn’t stop there. They repeated the presentations 60 times, thus getting a measure of the effect of lying repeatedly. Furthermore, 25 of the participants performed the tasks in an fMRI scanner.

Not all lies are created equal

Duh, everybody knows that. We call them “white lies” and “big lies”. How many times have you said, “OMG, you look great!” to somebody who is seriously ill? Or, on the opposite end, “I never said it” when the evidence is on video for all to see? But by repeating the experiment 60 times, the researchers were able to see another aspect of dishonesty—its development.

The findings were truly amazing. After the first presentation, the participants in the scenario that allowed them to profit at the expense of the partner showed the lowest degree of lying. And in the scenario where both participant and partner stood to benefit from lying, the degree of dishonesty was the highest. It’s as if people’s consciences are bothered when they benefit at the expense of somebody else, but much less so when both they and the other person benefit.

As Dr. Sharot said, this suggests that

“people lie most when it’s good for them and the other person. When it’s only good for them but hurts someone else, they lie less.”

But then, when presentations of the jar were repeated an additional 59 times, something interesting emerged. The initial level of dishonesty remained the same throughout the repetitions in all the scenarios except one: where the participants benefited, the level of dishonesty increased as the number of presentations increased.

In other words, people lie, but lies escalate with repetition only when the liar benefits. Self-interest is the thing that pushes people down the slippery slope.

The slippery slope in the brain

As I said, 25 of the participants lied while lying in the MRI machine. The area of the brain that “lit up” the most, that is, showed the greatest enhancement of metabolic activity, was a pair of almond-shaped neural centers called the amygdala. This region of the brain coordinates emotional responses, ranging from fight or flight to anger and aggression. But it also deals with the emotional discomfort we feel when our actions do not comport with our conscience—cognitive dissonance, as it is known in psychology.

This finding was almost predictable. But the interesting aspect of this study was the effect of repetition. As participants repeated the lying, the activity in the amygdala progressively declined. And larger reductions predicted bigger subsequent lies. I find this remarkable. The slippery slope is not just the favorite warning of scolds; it really exists in the brain.

The role of adaptation 

But why did the spikes of activity decline with repetition of the lying? The authors suggest that this is a manifestation of a phenomenon called adaptation. The same process occurs when you repeatedly show people unpleasant images. Remember the picture of the dead Syrian boy on the beach? Or the planes flying into the Twin Towers? The first time we saw them, we were horrified. But when these images were put on an endless loop, day in and day out, our reactions became progressively muted. We and our brains underwent adaptation.

Before we close, a word of caution: Emotions and behavior, in general, are controlled by several areas of the brain; the amygdala is only one of them, albeit an important one. For instance, I would be curious to know if the reward system is involved in some way. Does it play an inhibitory role in lying? Or is that wishful thinking? It could actually play a permissive role, increasing the reward reaction as smaller lies progress to bigger ones. I hope not.


Originally published 11/21/2016, this post has been reviewed and updated by the author.

We intuitively feel that we exercise free will in our decision-making. But as neurobiologists have suggested, this is a bit illusory. They have shown that circuits in different regions of the brain are activated fractions of a second before we consciously make a decision. This phenomenon may reflect the fact that decisions are not made in a vacuum. They are made in the context of our previous experiences and the observed brain activity is a reflection of the subconscious retrieval and organization of the relevant memories into a coherent framework that will affect, if not pre-determine, a conscious decision.

But, you might argue, we still own that decision; it is based on our own memories and experiences. So in a way, it was generated within us, not imposed or influenced by somebody else; in other words, it is pretty close to our understanding of “free will”. Well, not so fast.


What makes a memory?

Memories can be individual, unadulterated by external influences. For instance, I can vividly remember the smells of my mother’s kitchen when she was baking cheese blintzes. I can also remember how delicious they were. And I recall the pangs of regret after (alas, not while) devouring a dozen of them. These memories were generated by me; nobody generated them for me, nor altered them after the fact. If I see blintzes on the restaurant menu, all those memories are going to feed into the decision I make about whether to order one (or a dozen).

But I also have memories that I am not so sure are truly unadulterated or even purely factual. These are the kind of memories that have been generated but then altered by social influences. I loved my high school history teacher. But I gradually changed my memory of him because whenever I got together with my classmates, their memories were of a pretty unappealing character. In a word, I conformed to the prevailing memory.

Social psychologists distinguish two types of social conformity. If other people’s memory agrees with yours, updating and reinforcing it, you accept it as your own. This is called private conformity. But there is another type: you are totally confident that your memory is accurate and at variance with the others’, but under social pressure, you conform to the group’s memory. This is called public conformity. Interestingly, private conformity tends to be long-lasting, whereas public conformity tends to be short-lived, or transient.

You can now see how our memories may not be truly individual. They are subject to social influences, some positive (they correct factual errors and omissions), and some negative (they manipulate the memory of facts to conform to the “accepted” version, regardless of veracity). This is not just an exercise in theoretical psychology. It has important social implications.

Remember the wave of “recovered memories” stories of child abuse? They provided newspapers with sensational stories and lawyers with fat fees before the claimed psychological basis was shown to be largely bogus.

Or how about the eyewitness testimonies that are still being obtained under police or prosecutor pressure? The profound effects of social conformity on our personal and social lives demand that it be examined rigorously, using the latest scientific tools available.


The neurobiology of social conformity

A recent controversy flared up about special counsel Robert Mueller’s statement that Russian intelligence operatives tried to influence the 2016 election results by sowing discord in our political discourse. The Russian operation began about four years ago, well before Mr. Trump entered the presidential race, a fact that he quickly seized upon in his own defense:

“Russia started their anti-US campaign in 2014, long before I announced that I would run for President,” he wrote on Twitter. “The results of the election were not impacted. The Trump campaign did nothing wrong—no collusion!”

Really? What does science say about the potential impact?

A group of investigators from the neurobiology department of the Weizmann Institute in Israel studied the brain “signature” of social conformity. They used fMRI to record the brain activity of 30 adults who viewed a documentary-style movie and then were tested on their memories of the movie over a 2-week period. The researchers intentionally tried to induce memory errors in some subjects by telling them what others recalled about the movie; they exposed other subjects to randomized “recollections”.

The researchers observed greater neural activation in the hippocampus for items that showed persistent memory errors (private conformity) than for items that displayed transient errors (public conformity). They were also able to distinguish between conformity elicited by social influences (being exposed to other people, or at least their faces) and conformity produced by nonsocial methods (being exposed to computer-generated responses to questions on a test).

The investigators observed strong activation of the amygdala in subjects who displayed social conformity (responding when influenced by other people’s responses). In contrast, they observed less activation in subjects displaying nonsocial conformity (responding to computer-generated responses). This finding mirrored behavioral data suggesting that greater conformity occurred under social pressure.


Memory manipulation

The major importance is social. For the first time, it was demonstrated that the way social pressure affects our memory is literally etched in our mind—it activates new pathways. For instance, the amygdala had never been known to be involved in memory. Its involvement in memory-under-social-pressure is surprising (it had been known primarily for controlling emotions). It opens new possibilities for understanding psychological states such as cognitive dissonance and conflicts of conscience.

But there is another aspect to this study. The other side of the social conformity coin is memory manipulation. We have already mentioned it in relation to psychotherapists, police, and lawyers. But there are bigger manipulators, more insidious and dangerous: think propaganda machines of political parties, think totalitarian regimes rewriting history.

In the 1920s, there was hardly any antisemitism in Germany. The Jewish population was at the center of the cultural and economic life of the country. Goebbels’ propaganda machine, through incessant repetition of “the big lie,” planted in the collective German mind a new, bogus memory. The Soviets distorted historical facts and used Goebbels’ tactic to manipulate their own masses. And without getting overly political, one doesn’t have to dig deep to find the same tactics being used in our country, attempting to manipulate people’s collective memory.

Mark Twain’s pithy aphorism that

“A lie can travel halfway around the world while the truth is putting on its shoes”

is true, but incomplete. Who, or what is at fault?

A major new study published in the journal Science finds that false rumors on Twitter spread much more rapidly, on average, than those that turn out to be true. Interestingly, the study also finds that bots aren’t to blame for that discrepancy. People are.

The paper, authored by scholars at the MIT Media Lab, analyzed an enormous data set of 126,000 rumors that were spread on Twitter between 2006 and 2017, generating tweets from more than 3 million different accounts. Specifically, they looked at claims that were subsequently evaluated by major fact-checking organizations and found to be either true, false, or some combination of the two.

They found that false rumors traveled “farther, faster, deeper, and more broadly than the truth in all categories of information,” but especially politics. On average, it took true claims about six times as long as false claims to reach 1,500 people, with false political claims traveling even faster than false claims about other topics, such as science, business, and natural disasters.

Medicine has taught us that in order to effectively combat a disease we need to gain a detailed knowledge of its inner workings. Likewise, I believe that scientific understanding of the mechanisms underlying our social interactions may help in creating a healthier society. The Israeli paper and the MIT paper are important first steps in this journey.

More by this author: How Did We End Up in a Post-factual World?


This post was originally published August 2011. It was updated by the author on March 11, 2018.