Monday, December 31, 2012

New study of how Gaelic affects brain functions

Scientists are to investigate changes in brain functions among people who are fluent in English and Gaelic.

The study involving Glasgow and Edinburgh universities will require its test subjects to speak Gaelic exclusively for about 40 days.

The research aims to add new scientific evidence to suggestions that people who are bilingual have enhanced problem-solving skills and flexible thinking.

The study will include MRI scans to help detect changes in brain functions.

Scientists from Scotland, Belgium and Germany leading the research said the experiments would be entirely non-invasive.

They will be carried out at the University of Glasgow's Centre for Cognitive Neuroimaging, with the approval of the College of Science and Engineering's ethics committee.

Dr Meike Ramon, of the University of Glasgow and Belgium's Université catholique de Louvain, said brain functions changed when people performed specific tasks.

She said it should be possible to identify changes before and after someone has spoken Gaelic over a long period.

Physical tasks

Research published in August suggested bilingual children outperform children who speak only one language in problem-solving skills and creative thinking.

Researchers set lingual, arithmetical and physical tasks for 121 children, aged about nine, in Scotland and Sardinia, Italy.

They found that the 62 bilingual children were "significantly more successful in the tasks set for them".

The study was published in the International Journal of Bilingualism.

The Glasgow-based children spoke English and Gaelic, or English only, while the Sardinian cohort spoke either Italian only, or Italian and Sardinian.

They were asked to reproduce patterns of coloured blocks, to repeat orally a series of numbers, to give clear definitions of words and to resolve mentally a set of arithmetic problems.

The tasks were all set in English or Italian.


Family members

Last month, research published by the University of the Highlands and Islands suggested that generations of families that speak Gaelic use the language in different ways.

Gaelic dominates the conversations of family members aged between 53 and 71.

The second and third generations, family members aged 16 to 37 and three to seven respectively, mostly use English.

But the research also found adults spoke Gaelic when talking to children, who in turn would reply in the language.

BBC News. 2012. “New study of how Gaelic affects brain functions”. BBC News. Posted: December 20, 2012. Available online:

Sunday, December 30, 2012

Tracing humanity's African ancestry may mean rewriting 'out of Africa' dates

UAlberta archeologist's new research may lead to rethinking how and when our ancestors left Africa to colonize the globe

New research by a University of Alberta archeologist may lead to a rethinking of how, when and from where our ancestors left Africa.

U of A researcher and anthropology chair Pamela Willoughby's explorations in the Iringa region of southern Tanzania yielded fossils and other evidence that records the beginnings of our own species, Homo sapiens. Her research, recently published in the journal Quaternary International, may be key to answering questions about early human occupation and the migration out of Africa about 60,000 to 50,000 years ago, which led to modern humans colonizing the globe.

From two sites, Mlambalasi and nearby Magubike, she and members of her team, the Iringa Region Archaeological Project, uncovered artifacts that outline continuous human occupation between modern times and at least 200,000 years ago, including during a late Ice Age period when a near extinction-level event, or "genetic bottleneck," likely occurred.

Now, Willoughby and her team are working with people in the region to develop this area for ecotourism, to assist the region economically and create incentives to protect its archeological history.

"Some of these sites have signs that people were using them starting around 300,000 years ago. In fact, they're still being used today," she said. "But the idea that you have such ancient human occupation preserved in some of these places is pretty remarkable."

Magubike: Home to a modern Stone Age family?

Willoughby says one of the fascinating things about Magubike is the presence of a large rock shelter with an intact overhanging roof. The excavations yielded unprecedented ancient artifacts and fossils from under this roof. Samples from the site date from the earliest stages of the middle Stone Age to the Iron Age. The earlier deposits include human teeth and artifacts such as animal bones, shells and thousands of flaked stone tools.

The Iron Age finds can be dated using radiocarbon, but the older deposits must go through more specialized processes, such as electron spin resonance, to determine their age. Other parts of the Magubike rock shelter, excavated in 2006 and 2008, include occupations from after the middle Stone Age. Taken together, this information could be crucial to tracking the evolutionary development of the inhabitants.
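The dating limit implied here — radiocarbon for the Iron Age layers, but other methods for the far older deposits — follows directly from carbon-14's decay. A minimal sketch (not from the article; the function name and sample fractions are illustrative assumptions, using the conventional Libby half-life):

```python
import math

# Radiocarbon dating estimates age from the fraction of carbon-14 remaining
# in organic material. Conventional ages use the Libby half-life of 5,568 years.
LIBBY_HALF_LIFE = 5568.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Return the conventional radiocarbon age in years before present."""
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must be in (0, 1]")
    # Exponential decay: N/N0 = (1/2)**(t / half_life)
    # Solving for t gives: t = -half_life * log2(N/N0)
    return -LIBBY_HALF_LIFE * math.log2(fraction_remaining)

# A sample retaining half its original C-14 is exactly one half-life old:
print(round(radiocarbon_age(0.5)))    # 5568

# With only ~0.2% of the C-14 left, the age is roughly 50,000 years —
# near the practical limit of the method, which is why much older deposits
# (like Magubike's middle Stone Age layers) need techniques such as
# electron spin resonance.
print(round(radiocarbon_age(0.002)))
```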

"What's important about the whole sequence is that we may have a continuous record of human occupation," said Willoughby. "If we do—and we can prove it through these special dating techniques—then we have a place people lived in over the bottleneck."

Rugged, hilly terrain may have been key to survival

The team made similar findings at Mlambalasi, about 20 kilometres from Magubike. Among the findings at this site was a fragmentary human skeleton that probably dates to the late Pleistocene Ice Age—after the out-of-Africa expansion but at the end of the bottleneck period. The bottleneck theory explains what geneticists have found by studying the mitochondrial DNA of living people—that all non-Africans are descended from one lineage of people who left Africa about 50,000 years ago.

Reconstructions of past environments through pollen and other archeological records in Iringa suggest that people abandoned the lowland, tropical and coastal areas during that period but remained in the highlands, where vegetation has remained mostly unchanged over the last 50,000 years. Those who moved to higher ground may have found what is likely one of the few places that facilitated their survival and forced their adaptation. Further testing will determine whether these findings point to a clearer link to our African ancestors—a find Willoughby says could put that region of Tanzania on many archeologists' radar.

"It was only about 20 years ago that people recognized that modern Homo sapiens actually had an African ancestry, and everyone was focused on looking at early Homo sapiens in Europe who appeared around 40,000 years ago," she said. "But we now know that as far back as around 200,000 years ago, Africa was inhabited by people who were already physically exactly like us today or really close to being the same as us. All of a sudden, it's not Europe in this time period that's really important, it's Africa."

Engaging community yields co-operation, opportunity

Along with its scientific significance, Willoughby's work may be a linchpin to potential economic growth for the region. Since 2005, when a local cultural officer showed her the sites, she has been sharing information about her research with local citizens, schools and government—opening up opportunities for more research and co-operation. She keeps the region informed of the team's findings through posters distributed around Iringa, and has asked for and accepted assistance from local scholars. Now the community is also looking for her help in establishing the historic sites as a tourist attraction that will benefit the region.

Willoughby says she feels fortunate to have the support of the Tanzanian people. She tells people it is a shared history she is uncovering, something she is honoured to be able to do.

"They're telling me, 'You're putting Iringa on the map,'" she said. "As long as they keep letting me work there, and keep letting the people working with me work there, we'll be happy."

EurekAlert. 2012. “Tracing humanity's African ancestry may mean rewriting 'out of Africa' dates”. EurekAlert. Posted: December 13, 2012. Available online:

Saturday, December 29, 2012

Most Culturally Diverse City Per Mile Named

Manchester, England, is home to people who speak at least 153 different languages, indicating that it is the most culturally diverse city for its size in the world.

That level of diversity even goes beyond the scope of a city.

"Manchester's language diversity is higher than many countries in the world," Yaron Matras, who co-organized a study on Manchester, was quoted as saying in a press release.

Cities such as New York, London and Paris also rank highly, but they are much larger. For example, the population of New York City is just over 8.2 million, based on 2011 U.S. Census data. Manchester's population is estimated at about 503,000.

The people of Manchester, known as "Mancunians," represent nearly every culture on the planet, census data reveals. Many of these residents hold on to their family's heritage.

"We do know that around two thirds of Mancunian schoolchildren are bilingual – a huge figure which indicates just how precious its linguistic culture is," Matras said. "As immigration and the arrival of overseas students to the city continues, it's fair to say that this already large list (of languages spoken in Manchester) is set to grow."

Matras helped to establish Multilingual Manchester, an archive that contains over 100 reports on multilingualism and language minorities in Manchester. Examples of languages spoken in the city include Chitrali from northern Pakistan, Konkani from western India, Dagaare from Ghana and Burkina Faso and Uyghur from north-west China.

The European Union may have helped to forge the city's linguistic and cultural diversity.

Matras explained, "Because of EU enlargement and the access granted to new EU citizens, language diversity in Manchester is more dynamic than most cities. Melbourne, for example, is famous for its many languages, but as it tends to have very established communities, it will be less diverse than Manchester."

Grasping the scope of the city's diversity has important implications. All aspects of city life, from education to health care, require sensitivity to the particular needs of individuals. Good communication is key, so the researchers are working closely with local authorities and schools to advise on matters pertaining to language, as well as on ensuring proper representation for groups deemed to be vulnerable.

While the city faces challenges, its rich cultural offerings, strength and unique character prevail. Travelers seem to agree. Manchester is now the third-most visited city in the UK by foreign visitors, after London and Edinburgh. It is the most visited city in England outside London.

Vegas, Jennifer. 2012. “Most Culturally Diverse City Per Mile Named”. Discovery News. Posted: December 13, 2012. Available online:

Friday, December 28, 2012

How Did Female Genital Mutilation Begin?

United Nations Member States recently approved the first-ever draft resolution calling for a global ban on female genital mutilation/cutting (FGM/C).

Hailed by Secretary-General Ban Ki-moon as a major step forward in protecting women and girls and ending impunity for the harmful practice, the text is expected to be endorsed by the UN General Assembly this month.

How did the practice begin anyway?

Although theories on the origins of FGM abound, no one really knows when, how or why it started.

"There's no way of knowing the origins of FGM, it appears in many different cultures, from Australian aboriginal tribes to different African societies," medical historian David Gollaher, president and CEO of the California Healthcare Institute (CHI), and the author of "Circumcision," told Discovery News.

Used to control women's sexuality, the practice involves the partial or total removal of external genitalia. In its severest form, called infibulation, the vaginal opening is also sewn up, leaving only a small hole for the release of urine and menstrual blood.

While the term infibulation has its roots in ancient Rome, where female slaves had fibulae (brooches) pierced through their labia to prevent them from getting pregnant, a widespread assumption places the origins of female genital cutting in pharaonic Egypt. This would be supported by the contemporary term "pharaonic circumcision."

The definition, however, might be misleading. While there's evidence of male circumcision in Old Kingdom Egypt, there is none for female.

"This was not common practice in ancient Egypt. There is no physical evidence in mummies, nor is there anything in the art or literature. It probably originated in sub-Saharan Africa, and was adopted here later on," Salima Ikram, professor of Egyptology at the American University in Cairo, told Discovery News.

Historically, the first mention of male and female circumcision appears in the writings by the Greek geographer Strabo, who visited Egypt around 25 B.C.

"One of the customs most zealously observed among the Egyptians is this, that they rear every child that is born, and circumcise the males, and excise the females," Strabo wrote in his 17-volume work Geographica.

A Greek papyrus dated 163 B.C. mentioned the operation being performed on girls in Memphis, Egypt, at the age when they received their dowries, supporting theories that FGM originated as a form of initiation of young women.

Other writers later explained that the procedure was carried out for less ritualistic reasons.

According to the 6th century A.D. Greek physician Aetios, the cutting was necessary in the presence of an overly large clitoris.

Seen as "a deformity and a source of shame," the clitoris would produce irritation for its "continual rubbing against the clothes" thus "stimulating the appetite for sexual intercourse."

"On this account, it seemed proper to the Egyptians to remove it before it became greatly enlarged, especially at that time when the girls were about to be married," Aetios wrote in The Gynecology and Obstetrics of the Sixth Century A.D.

According to U.S. historian Mary Knight, author of the paper "Curing Cut or Ritual Mutilation?: Some Remarks on the Practice of Female and Male Circumcision in Graeco-Roman Egypt," medical motivations probably mixed with ritual, social and moral reasons to favor "the continuation of a practice that initially may have been narrowly performed and whose original motivation most likely had long been forgotten."

Many centuries later, 19th-century gynaecologists in England and the United States would perform clitoridectomies to treat various psychological symptoms as well as "masturbation and nymphomania."

"The surgeries we see in Victorian England and America were generally based on a now discarded theory called 'reflex neurosis,' which held that many disorders like depression and neurasthenia originated in genital inflammation," Gollaher said.

"The same theory was behind the medicalization of male circumcision in the late 19th century," he added.

It is only relatively recently that FGM has been recognized internationally as a violation of the human rights of girls and women.

Sweden was the first Western country to outlaw FGM, followed in 1985 by the UK. In the United States it became illegal in 1997, and in the same year the WHO issued a joint statement with the United Nations Children’s Fund (UNICEF) and the United Nations Population Fund (UNFPA) against the practice. FGM is a crime in many countries now.

Last week the head of the Organization of Islamic Cooperation also called for abolishing female genital mutilation.

"This practice is a ritual that has survived over centuries and must be stopped as Islam does not support it," Secretary-General Ekmeleddin Ihsanoglu said at the intergovernmental organisation's 4th conference on the role of women in development, in Jakarta, Indonesia.

An estimated 140 million girls and women now alive have undergone the mutilating procedure in 28 African countries, as well as in Yemen, Iraq, Malaysia, Indonesia and among certain ethnic groups in South America and some immigrant communities in the West.

About three million girls in Africa are said to be forced to undergo the procedure each year. The cutting is often done without anaesthetic, in conditions that risk potentially fatal infection -- often using scissors, razor blades, broken glass and tin can lids.

Although not legally binding, the UN resolution carries considerable moral and political weight.

Lorenzi, Rossella. 2012. “How Did Female Genital Mutilation Begin?”. Discovery News. Posted: December 10, 2012. Available online:

Thursday, December 27, 2012

30,000 year old engraved stone found in China intrigues archaeologists

Cognition and symbolic thought are often viewed as important features of modern human behaviour. Engraved objects are seen as hallmarks of this cognition and symbolism, and even as evidence for language; indeed, they are usually considered one of the most important features of modern behaviour.

Careful re-analysis of lithic assemblage

During analysis in 2012 of the stone tools that had been unearthed at the Shuidonggou site in 1980, an interesting engraved stone artefact was discovered among the assemblage.

Dr Fei Peng, postdoctoral research fellow at the Graduate University of the Chinese Academy of Sciences and lead author of the paper in the Chinese Science Bulletin that reported the discovery, was amazed at what the team found.

“It is the first engraved non-organic artefact from the whole Palaeolithic period in China”, Dr Peng said. However, this discovery was not entirely a lucky coincidence: the team knew that, when analysing materials unearthed from the site during excavations in the 1920s, French archaeologist Henry Breuil had observed parallel incisions on the surface of some of the siliceous pebbles.

Unfortunately, he did not provide any further details regarding the incised pebbles he had noticed, so during the lithic analysis the team took great care to be aware of the potential existence of engraved objects.

The artefact in question is made of a siliceous limestone and measures 68 x 36 x 23 mm. One of the cortical faces bears 8 lines, clearly visible to the naked eye and inscribed deeply into the thick cortex. All the incisions are perpendicular to the long axis of the core; two incisions cross, while the others are roughly parallel.

Prof Xing Gao of the Institute of Vertebrate Palaeontology and Palaeoanthropology in Beijing, co-author of the paper, said: “[the] Shuidonggou site includes 12 localities, ranging in date from the Early Late Palaeolithic to the Late Palaeolithic. The engraved stone artefact was found at Locality 1, which is [dated to] 30,000 years old.”

Examination confirms the discovery

Digital microscopy was used to further examine every incision and 3D images were created to enable examination of the artefact at the minutest level.

After excluding the possibility of natural cracking, animal-induced damage and even unintentional human by-products, the team concluded that the incisions could only have been made intentionally and deliberately.

Although the function of these incisions is uncertain, the straight shape of each line shows it was incised once over a short time interval without repeated re-cutting, implying the intriguing possibility of counting.

In addition to the engraved stone artefact, a single ostrich eggshell bead was unearthed from Locality 1. The lithic assemblage of this locality includes blade production and elongated tool blanks, a blade technology that was probably introduced from the Altai region of Russian Siberia and that matches comparable lithic assemblages there.

Further questions

As it stands, the archaeological team cannot yet provide a definitive scenario, and more research into this particular puzzle is required. This discovery opens up new questions regarding the creators of both the ostrich eggshell beads and the engraved stone. Were they made by populations who migrated from the west, perhaps from what is now the Altai region? Were they the result of acculturation, whereby a population in north China learned this technology from an incoming group or individual? Or were they created by the local people themselves as part of a cognitive advancement?

Past Horizons. 2012. “30,000 year old engraved stone found in China intrigues archaeologists”. Past Horizons. Posted: December 5, 2012. Available online:

Wednesday, December 26, 2012

Africa's Homo sapiens were the first techies

The search for the origin of modern human behaviour and technological advancement among our ancestors in southern Africa some 70 000 years ago has taken a step closer to firmly establishing Africa, and especially South Africa, as the primary centre for the early development of human behaviour.

A new research paper by renowned Wits University archaeologist, Prof. Christopher Henshilwood, is the first detailed summary of the time periods he and a group of international researchers have been studying in South Africa: namely the Still Bay techno-traditions (c. 75 000 – 70 000 years) and the Howiesons Poort techno-tradition (c. 65 000 – 60 000 years).

The paper, entitled Late Pleistocene Techno-traditions in Southern Africa: A Review of the Still Bay and Howiesons Poort, c. 75 ka, has been published online in the Journal of World Prehistory on 6 November 2012.

Henshilwood says these periods were significant in the development of Homo sapiens behaviour in southern Africa. They were periods of many innovations including, for example, the first abstract art (engraved ochre and engraved ostrich eggshell); the first jewellery (shell beads); the first bone tools; the earliest use of the pressure flaking technique, which was used in combination with heating to make stone spear points; and the first probable use of stone-tipped arrows launched by bow.

"All of these innovations, plus many others we are just discovering, clearly show that Homo sapiens in southern Africa at that time were cognitively modern and behaving in many ways like ourselves. It is a good reason to be proud of our earliest, common ancestors who lived and evolved in South Africa and who later spread out into the rest of the world after about 60 000 years," says Henshilwood.

The research also addresses some of the nagging questions as to what drove our ancestors to develop these innovative technologies. According to Henshilwood answers to these questions are, in part, found in demography and climate change, particularly changing sea levels, which were major drivers of innovation and variability in material culture.

This paper is just the latest to come from Henshilwood and his team's research on African archaeology, which has overturned the idea that modern human behaviour originated in Europe after about 40 000 years ago. There is increasing evidence for an African origin of behavioural and technological modernity more than 70 000 years ago, and for the earliest origin of all Homo sapiens lying in Africa, with a special focus on southern Africa.

Henshilwood writes: "In just the past decade our knowledge of Homo sapiens behaviour in the Middle Stone Age, and in particular of the Still Bay and Howiesons Poort, has expanded considerably. With the benefit of hindsight we may ironically conclude that the origins of 'Neanthropic Man', the epitome of behavioural modernity in Europe, lay after all in Africa."

EurekAlert. 2012. “Africa's Homo sapiens were the first techies”. EurekAlert. Posted: December 5, 2012. Available online:

Tuesday, December 25, 2012

Unwrapping the mummy – performance and science

Mummies have been objects of horror and fascination in popular culture since the early 1800s at least — over a century before Boris Karloff portrayed an ancient Egyptian searching for his lost love in the 1932 film “The Mummy.”

Public “unwrappings” of mummified human remains performed by both showmen and scientists heightened the fascination, but also helped develop the growing science of Egyptology.

Dr. Kathleen Sheppard, a historian at the Missouri University of Science and Technology, argues this point in her latest article, “Between Spectacle and Science: Margaret Murray and the Tomb of the Two Brothers”, in the December issue of the journal Science in Context.

A public spectacle

While mummy unwrappings served as public spectacles that objectified exotic artefacts, they were also scientific investigations that sought to reveal medical and historical information about ancient life.

On Thursday, 7 May 1908, The Manchester Guardian reported the unveiling of human remains in the Chemistry Theatre at Manchester University. As the “peering collection” of men and women looked on:

“[the ancient mummy] Khnumu Nekht was bared of his wrappings and brought once more to the light of day. . . . Near the body the linen sheets had rotted, and they fell to pieces at a touch. The bones, however, were more or less perfect. There were traces of flesh on them. It was on the whole a gruesome business, and one or two people left early. (“Mummy of Khnumu Nekht” 1908)”

Margaret Alice Murray was leading the “gruesome business” at the front of the theatre, wearing a white pinafore apron and her hair neatly pinned back.

A few notes survive from the unwrapping in the archives of the Manchester Museum. The only detailed report of the investigation, The Tomb of the Two Brothers, was published two years later and remains today one of the leading studies of the mummification processes and human remains of Middle Kingdom Egypt.

Educating the public

Sheppard says Egyptologist Margaret Murray, the first woman to publicly unwrap a mummy, sought to unravel the mysteries of ancient Egypt by exposing mummified human remains. She says Murray’s work is culturally significant because it is “poised between spectacle and science, drawing morbid public interest while also producing ground-breaking scientific work that continues to this day.”

“Public spectacles that displayed mummified remains as objects of curiosity date back to the 16th century and these types of spectacles were highly engaging shows in which people were, to a certain degree, educated about different aspects of science both by showmen and scientists.”

Many Egyptologists drew a distinction between “Egyptomania,” the fascination with all things Egypt, and “Egyptology,” the scientific study of Egyptian life, Sheppard says, but Murray had a different goal – involving the public in scientific inquiry with a goal of correcting popular misconceptions.

“Murray tried to get the public to see that mummies weren’t magical, they were just preserved human remains to be studied and learned from,” Sheppard explains. “In other words, rather than trying to separate the ‘mania’ from the ‘ology,’ she wanted to bring reason and understanding to the mania.”

Past Horizons. 2012. “Unwrapping the mummy – performance and science”. Past Horizons. Posted: December 4, 2012. Available online:

Monday, December 24, 2012

Drought May Have Killed Sumerian Language

A 200-year-long drought 4,200 years ago may have killed off the ancient Sumerian language, one geologist says.

Because no written accounts explicitly mention drought as the reason for the Sumerian demise, the conclusions rely on indirect clues. But several pieces of archaeological and geological evidence tie the gradual decline of the Sumerian civilization to a drought.

The findings, which were presented Monday (Dec. 3) here at the annual meeting of the American Geophysical Union, show how vulnerable human society may be to climate change, including human-caused change.

"This was not a single summer or winter, this was 200 to 300 years of drought," said Matt Konfirst, a geologist at the Byrd Polar Research Center.

Beginning about 3500 B.C., the Sumerian culture flourished in ancient Mesopotamia, which was located in present-day Iraq. Ancient Sumerians invented cuneiform writing, built the world's first wheel and arch, and wrote the first epic poem, "Gilgamesh."

But after 200 to 300 years of upheaval, the Sumerian culture disappeared around 4,000 years ago, and the Sumerian language went extinct soon after that.

Konfirst wanted to see if a drought that spanned about 200 years may have caused the decline. Several geological records point to a long period of drier weather in the Middle East around 4,200 years ago, Konfirst said: evaporation increased at the Red Sea and the Dead Sea, water levels dropped at Lake Van in Turkey, and cores of marine sediments from that period indicate increased dust in the environment.

"As we go into the 4,200-year-ago climate anomaly, we actually see that estimated rainfall decreases substantially in this region and the number of sites that are populated at this time period reduce substantially," he said.

Around the same time, 74 percent of the ancient Mesopotamian settlements were abandoned, according to a 2006 study of an archaeological site called Tell Leilan in Syria. The populated area also shrank by 93 percent, he said.

"People still live in this region. It's not that the collapse of a civilization means that an area is completely abandoned," he said. "But that there's a sharp change in the population."

During the great drought, two waves of marauding nomads descended upon the region, sacking the capital city of Ur. After around 2000 B.C., ancient Sumerian gradually died off as a spoken language in the region. For the next 2,000 years, the tongue lingered on as a dead written language, similar to Latin in the Middle Ages, before going completely extinct, Konfirst said.

The coincidence of the social upheaval, depopulation in the area and the geologic record of drought suggests climate change might have played a role in the loss of the Sumerian language, Konfirst said.

The findings also suggest that modern-day civilizations may be vulnerable to climate change, he said.

Ghose, Tia. 2012. “Drought May Have Killed Sumerian Language”. Live Science. Posted: December 4, 2012. Available online:

Sunday, December 23, 2012

The search is on to find the faker behind Piltdown Man

One hundred years ago, the world was introduced to Piltdown Man, seemingly one of the most remarkable discoveries of the age – which turned out to be perhaps the most audacious and brilliant fraud in scientific history. The bones and stones found in a gravel pit near Piltdown Common in Sussex, shortly before the First World War, have been exposed as fakes for more than half a century, yet even now we do not know the perpetrator’s identity or motive. Now research is under way that may at last reveal the identity of the forger or forgers, using a barrage of the latest forensic techniques.

Today, the assembly of chocolate-brown objects known as “Piltdown Man” – skull and jaw fragments, mammal fossils and flint tools – resides in a safe in the Natural History Museum. But when it was revealed to the world at a meeting of the Geological Society of London on December 18 1912, it caused a sensation: this evidence of the world’s earliest known human, together with the tools he made and the animals he hunted, appeared to be no less than the evolutionary “missing link” between man and ape that Charles Darwin had predicted.

The discovery had been made by a Sussex solicitor and amateur antiquarian, Charles Dawson, who brought his finds to his friend Dr Arthur Smith Woodward, the Keeper of Geology at the Natural History Museum. Although Smith Woodward’s expertise was in fossil fish, he staked his – and the museum’s – reputation on the find, which he named in honour of his friend: the Dawn Man of Dawson, or Eoanthropus dawsoni.

When they made their presentation at that crowded meeting at the Geological Society, the atmosphere was intense. By the beginning of the 20th century, scientists had been obsessed by the origins of humanity for 50 years, but fossil finds were scarce. While samples of early man’s remains had been discovered at sites in France, Germany, Belgium and Java – some of great antiquity – nothing comparable had been found in Britain.

So when Dawson and Smith Woodward announced their discovery, the excitement was immense: the earliest known human was British! Some of the world’s greatest scientists agreed that the discovery was probably “of equal, if not of greater consequence” than any fossils yet discovered here or abroad.

Piltdown Man, with his human-like braincase and very apelike jaw, was exactly what the experts expected our ancestors to look like. In 1914, a large bone implement was found at the site, and it too was unique. Fashioned from the leg bone of an extinct elephant, it had a flat blade at one end, a point at the other. What no one could work out was what it was for. Reginald Smith, an antiquarian at the British Museum, voiced what everyone was probably thinking: he “could not imagine”, he said, “any use for an implement that looked like part of a cricket bat”.

Still, taken together these finds created an extraordinary picture. Here was a fossil man, with his tools and contemporary animals, found in gravel that was thought to date from the early Ice Age, between 500,000 and 1 million years ago. Smith Woodward even visualised his everyday life, writing almost affectionately of what he ate, how he cooked, what he wore. “If he could be seen alive, he would be recognised at once as human,” he wrote in 1934, “but his stooping, stumpy, shuffling form would betray his lowly grade.”

Yet even in 1912, there were dissenting voices. Doubts were raised whether that apelike jaw and human skull could be from the same individual – or even the same species. Perhaps jaw and skull had accidentally come together in the gravel pit. The timely discovery of a canine tooth in 1913, which looked almost exactly like Smith Woodward’s predictions, strengthened the case for Piltdown Man. Then, in 1915, with more questions being raised, came another discovery. At a second site, two miles away, Dawson found more skull fragments and a tooth, similar to the original specimens. This was clear evidence of a second Dawn Man – and the doubters were silenced.

Forty years later, with dating techniques having improved substantially, Piltdown Man was revealed to be a fraud. The skull was that of a modern human, a few centuries old; the jaw that of an orang-utan. It had been broken to hide identifying features and the teeth filed to resemble those of humans. In fact, almost every specimen in the collection had been stained to match the Piltdown gravels – and everything had been planted at the site.

The newspapers splashed on the scandal, and what everyone wanted to know was who had done it – and why? Experts fumed, rightly, that this was far from a harmless hoax: it was a malicious deception that had hijacked our understanding of human origins. Yet its provenance remained obscure. There was no incriminating fingerprint or confessional letter to convict a perpetrator; all the evidence was purely circumstantial.

With the centenary of the “discovery” approaching, a group of scientists – known unofficially as the Piltdowners – recently decided that it is time finally to expose the forger. Led by Professor Chris Stringer, research leader in human origins at the Natural History Museum, the team is re-analysing all the Piltdown material, using a range of sophisticated techniques from DNA and isotope analysis to a high-resolution microscope. With this forensic arsenal, every facet of the forgery will be exposed. It is hoped that an identifying “signature” of the perpetrator will emerge – or, at the very least, that we will learn how many hands were at work and where the various specimens came from.

What is already apparent is that the skills of the forger – or forgers – were vastly more sophisticated than had been suspected. For example, all the bones and stones appear to have been doctored to obscure any distinguishing characteristics. And the situation is complicated by the range of possible suspects. There is a long cast of characters associated with Piltdown – most of whom have been accused somewhere in the literature of committing the forgery. Sir Arthur Conan Doyle, the creator of Sherlock Holmes, lived near Piltdown, played at Piltdown Golf Club – where Dawson was secretary – and was known to have visited the site. The Jesuit philosopher and palaeontologist Father Pierre Teilhard de Chardin, who worked on excavations alongside Dawson and Smith Woodward and discovered the all-important canine, has also been accused. Then again, so has Smith Woodward himself.

Of course, the likeliest suspect has always been Charles Dawson himself. It was he who first brought the fossils to Smith Woodward, who found most of the material, and who is the only source for the second Piltdown Man. When he died in 1916, at the age of 52, after a short illness, nothing more was found at the site – although that did not stop Smith Woodward continuing to excavate the Piltdown gravels until old age and blindness ended his futile labours.

For his part, Smith Woodward always described Dawson as an engaging, cheerful colleague. But many of those who knew him in Lewes, where he lived, not only had doubts about his character but also believed him capable of deception. It is now known that he forged many of his other archaeological finds. What could his motive have been? Perhaps he wanted recognition, to be elected a Fellow of the Royal Society (his name was put forward in 1913, but it never happened). Yet all the evidence against him is circumstantial – and there remains the question of whether the solicitor would have had the skills to fool so many scientists for so long.

Soon, the Piltdowners will have answers – but the complexities these experts are uncovering demonstrate, a century on, just why the great Piltdown mystery has taken so long to crack.

Shindler, Karolyn. 2012. “The search is on to find the faker behind Piltdown Man”. The Telegraph. Posted: December 3, 2012. Available online:

Saturday, December 22, 2012

China unearths ruined palace near terracotta army

Excavations near Xi'an reveal vast ancient palace complex a quarter of the size of Beijing's Forbidden City

Archaeologists have found the remains of an ancient imperial palace near the tomb of emperor Qin Shi Huang, home of the famous terracotta army, China's state media reported on Sunday.

The palace is the largest complex discovered so far in the emperor's sprawling 22 square-mile (56 square-km) third-century BC mausoleum, which lies on the outskirts of Xi'an, an ancient capital city in central China, an associate researcher at the Shaanxi provincial institute of archaeology told China's official news wire Xinhua.

It is an estimated 690 metres long and 250 metres wide – about a quarter of the size of the Forbidden City in Beijing – and includes 18 courtyard-style houses with one main building at the centre, according to the researcher, Sun Weigang.

Sun called the palace a clear predecessor to the Forbidden City, which was occupied by emperors during the later Ming and Qing dynasties. Both were built on north-south axes in keeping with traditional Chinese cosmology.

Despite wars soon after Qin Shi Huang's death – and more than 2,000 years of exposure – the foundations are well preserved. Archaeologists have found walls, gates, stone roads, pottery sherds and some brickwork, according to Xinhua.

They have been excavating the foundations since 2010. Qin's tomb is guarded by an estimated 6,000 life-sized terracotta warriors, including remarkably well-preserved cavalrymen, chariots and horses, each one unique. They were first discovered in 1974 by workers digging a well. About 2,000 have been excavated; 110 of them were unearthed this summer.

The United Nations Educational, Scientific and Cultural Organisation (Unesco) declared the army a world heritage site in 1987.

Qin began designing the palace for his afterlife shortly after he became king of the Qin state, aged 13. The complex took 700,000 workers about 40 years to build and was completed two years after his death.

According to writings by the Han dynasty scholar Sima Qian, Qin Shi Huang's tomb is 120 metres high, sealed off by a vermilion stone wall, surrounded by rivers of mercury and protected by booby traps. It has not been excavated for fear of damaging the potentially priceless artefacts inside.

Chinese historians portray Qin as a great unifier, who conquered six states and established an expansive feudal kingdom with a united currency and writing system. He is also known as a ruthless leader who burned books, buried opponents alive and castrated prisoners of war.

Kaiman, Jonathan. 2012. “China unearths ruined palace near terracotta army”. The Guardian. Posted: December 3, 2012. Available online:

Thursday, December 20, 2012

Grave Robbers Seek Bones for Voodoo Rituals

Over 100 graves have been dug up in the West African country of Benin, looted by grave robbers seeking body parts for use in magic rituals.

According to a Reuters news story,

"The incident is the most serious case of grave-robbing in the West African state, the world capital of voodoo where most of the country's 9 million residents practice a benign form of the official religion. Authorities in Dangbo, a village 6 miles from the capital Porto-Novo, began an investigation after a mason working at the cemetery said he spotted several masked men digging up the graves, from which organs and skulls were removed. 'The desecration of graves is about money in this region,' said Joseph Afaton, director of the cemetery. 'It is for sacrifices, or for bewitching.'"

Many Americans only think of witches and witchcraft around Halloween. But in many countries belief in witches is common, and black magic is considered part of everyday life. In Africa, witch doctors are consulted not only for healing diseases, but also for placing (or removing) magic curses or bringing luck -- much like many psychics and fortunetellers in America.

A 2010 Gallup poll found that belief in magic is widespread throughout sub-Saharan Africa; on average 55 percent of Africans believe in witchcraft.

Though graves are the most common source of bones, organs and limbs, in the past few years albino children and adults in Africa have been attacked and killed for their body parts. The belief and practice of using body parts for magical ritual or benefit is called muti.

Muti hunting was featured in the 2009 South African science-fiction film "District 9," in which the hero's body parts were sought after by a local warlord who believed that the limbs would give him magical powers. Muti murders are particularly brutal, with knives and machetes used to cut and hack off limbs, breasts, and other body parts from their living victims. Many of the albinos were beheaded, their heads carefully collected and preserved as gruesome good luck charms or for use in rituals.

Throughout most of history, medical knowledge of anatomy was poor and indirect, partly because of fear and taboos against cutting open corpses. The Renaissance brought an emphasis on practical, real-world knowledge, which necessarily meant examining and cutting up the dead.

In Europe, the rise of early medical centers created a strong demand for dead bodies; a few cadavers were made available by royal decree, usually the bodies of condemned criminals. In the 1700s, in fact, dissection was a punishment for serious crimes.

By 1720, theft from graveyards was common in London, England, and grave robbers (or "resurrection men," as they were known) were making a profit digging up bodies and selling them to anatomists and doctors. Among the most infamous of these criminals were the Irish murderers William Burke and William Hare, who killed sixteen people and sold the bodies to a well-known Edinburgh anatomist in 1828.

By the 1900s most grave robbing in the West had ceased, though as the incidents in Benin demonstrate the practice lingers where belief in magic is common.

Radford, Benjamin. 2012. “Grave Robbers Seek Bones for Voodoo Rituals”. Discovery News. Posted: December 3, 2012. Available online:

Wednesday, December 19, 2012

Origin of intelligence and mental illness linked to ancient genetic accident

Scientists have discovered for the first time how humans – and other mammals – have evolved to have intelligence.

Researchers have identified the moment in history when the genes that enabled us to think and reason evolved.

This point, 500 million years ago, gave rise to our ability to learn complex skills, analyse situations and think flexibly.

Professor Seth Grant, of the University of Edinburgh, who led the research, said: "One of the greatest scientific problems is to explain how intelligence and complex behaviours arose during evolution."

The research, which is detailed in two papers in Nature Neuroscience, also shows a direct link between the evolution of behaviour and the origins of brain diseases.

Scientists believe that the same genes that improved our mental capacity are also responsible for a number of brain disorders.

"This groundbreaking work has implications for how we understand the emergence of psychiatric disorders and will offer new avenues for the development of new treatments," said John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, one of the study funders.

The study shows that intelligence in humans developed as the result of an increase in the number of brain genes in our evolutionary ancestors.

The researchers suggest that a simple invertebrate animal living in the sea 500 million years ago experienced a 'genetic accident', which resulted in extra copies of these genes being made.

This animal's descendants benefited from these extra genes, leading to behaviourally sophisticated vertebrates – including humans.

The research team studied the mental abilities of mice and humans, using comparative tasks that involved identifying objects on touch-screen computers.

Researchers then combined results of these behavioural tests with information from the genetic codes of various species to work out when different behaviours evolved.

They found that higher mental functions in humans and mice were controlled by the same genes.

The study also showed that when these genes were mutated or damaged, they impaired higher mental functions.

"Our work shows that the price of higher intelligence and more complex behaviours is more mental illness," said Professor Grant.

The researchers had previously shown that more than 100 childhood and adult brain diseases are caused by gene mutations.

"We can now apply genetics and behavioural testing to help patients with these diseases", said Dr Tim Bussey from Cambridge University, which was also involved in the study.

EurekAlert. 2012. “Origin of intelligence and mental illness linked to ancient genetic accident”. EurekAlert. Posted: December 2, 2012. Available online:

Tuesday, December 18, 2012

Native Americans and Northern Europeans More Closely Related Than Previously Thought

Using genetic analyses, scientists have discovered that Northern European populations -- including British, Scandinavians, French, and some Eastern Europeans -- descend from a mixture of two very different ancestral populations, and one of these populations is related to Native Americans. This discovery helps fill gaps in scientific understanding of both Native American and Northern European ancestry, while providing an explanation for some genetic similarities among what would otherwise seem to be very divergent groups.

This research was published in the November 2012 issue of the Genetics Society of America's journal Genetics.

According to Nick Patterson, first author of the report, "There is a genetic link between the paleolithic population of Europe and modern Native Americans. The evidence is that the population that crossed the Bering Strait from Siberia into the Americas more than 15,000 years ago was likely related to the ancient population of Europe."

To make this discovery, Patterson worked with Harvard Medical School Professor of Genetics David Reich and other colleagues to study DNA diversity, and found that one of these ancestral populations was the first farming population of Europe, whose DNA lives on today in relatively unmixed form in Sardinians and the people of the Basque Country, and in at least the Druze population in the Middle East. The other ancestral population is likely to have been the initial hunter-gathering population of Europe. These two populations were very different when they met. Today the hunter-gathering ancestral population of Europe appears to have its closest affinity to people in far Northeastern Siberia and Native Americans.

The statistical tools for analyzing population mixture were developed by Patterson and presented in a systematic way in the report. These tools are the same ones used in previous discoveries showing that Indian populations are admixed between two highly diverged ancestral populations and showing that Neanderthals contributed one to four percent of the ancestry of present-day Europeans. In addition, the paper releases a major new dataset that characterizes genetic diversity in 934 samples from 53 diverse worldwide populations.

"The human genome holds numerous secrets. Not only does it unlock important clues to cure human disease, it also reveals clues to our prehistoric past," said Mark Johnston, Editor-in-Chief of the journal Genetics. "This relationship between humans separated by the Atlantic Ocean reveals surprising features of the migration patterns of our ancestors, and reinforces the truth that all humans are closely related."

Science Daily. 2012. “Native Americans and Northern Europeans More Closely Related Than Previously Thought”. Science Daily. Posted: November 30, 2012. Available online:

Monday, December 17, 2012

Research in Southern India Provides a Sweet Look at Preservation of Ecological Knowledge

A well-known proverb states that, "It takes a village to raise a child." According to Kathryn Demps, it also takes a village to preserve ecological knowledge in upcoming generations.

Demps, a Boise State University visiting assistant professor in anthropology, studies behavioral and evolutionary ecology in small-scale societies. Her latest project looks at the honey-gathering Jenu Kuruba tribe in South India and how its cultural knowledge is being preserved, or lost, in our modern world.

"What we learn from others -- our culture, skills, values, beliefs and knowledge -- is passed through the generations," she said. "How it is passed down can change the body of knowledge."

Demps noted that in today's race toward homogenous societies, indigenous knowledge is being lost even faster than languages.

One example is medicinal specialists. In the 1980s, the first hospital in the area populated by the Jenu Kuruba opened; 1985 was the last year for a recorded medicinal ceremony. Gurus didn't train more apprentices because no one wanted to learn, and as a result, the knowledge has been completely lost in just one generation.

The Jenu Kuruba comprise a band of small communities located in the forested Kodagu District. For generations, young men from the tribe have collected wild honey by nimbly scaling massive trees. Because honey is in such high demand in the cities for its purported medicinal qualities, it fetches a high price and is an important part of the local economy.

But the skills needed to harvest this precious commodity are at risk of dying out. Several former honey-gathering communities were moved away from the forests in an attempt to create national parks, and those that remain (thanks to special rights from the government to live in the forest and collect non-timber products) are now sending children to school during the day, drastically affecting how, or even if, they learn necessary honey-gathering skills.

Prior to construction of the local school in the 1970s, children rarely received a formal, western education. While the average level of education is still low, most children are spending at least a few critical years learning new skills at the expense of traditional, indigenous knowledge.

"Kids need to learn how to climb trees and how to make big, smoky torches from sticks wrapped in green leaves," Demps said. "They have to learn to climb onto the branches and cut off the honeycomb. There also are ritual things like the honey-collecting song that is supposed to appease the bees and show brotherhood."

Locals learn at an early age to scale trees by shimmying 100 feet up the trunk, pressing their feet flat into the bark and using their arms to pull themselves ever higher. Young boys pick this up by playing a traditional climbing game called mara cothi, which means "tree monkey" and is similar to an arboreal version of tag. Because the climb can be so dangerous, young men often leave offerings at the base of a tree, asking for a blessing for a safe climb.

More skilled gatherers also need to know how to work with the bees, coaxing the queen into a new hive, gathering the honey or calming a troubled colony. This is especially important given that the largest honeybees in the region are massive compared to the average Idaho varieties, and pack a powerful sting.

Demps and fellow researchers are evaluating data collected from almost 200 local residents ages 6-65 in order to understand what residents know at various ages, and who they learned it from.

"If we know what people are learning, and how they are learning it, we can make recommendations that may remove conflicts affecting traditional knowledge," Demps said. "For instance, giving children just five or six days a month off of school can make a big difference. That has been shown to be enough time for children to learn the skills they need to collect honey but still learn western knowledge. They also can learn how to collect forest foods for better nutrition as they are out hunting and gathering, as well as medicinal knowledge and how to manage the environment so that they are less likely to deforest the area."

Demps has published two papers based on her research and is working on another based on traditional knowledge and schooling. She hopes to eventually write a book examining the tribe's traditional life ways that draws on various firsthand accounts over the past two centuries.

Science Daily. 2012. “Research in Southern India Provides a Sweet Look at Preservation of Ecological Knowledge”. Science Daily. Posted: November 30, 2012. Available online:

Sunday, December 16, 2012

Mayans Cooked Food With Clay Balls

Planning a last supper party on December 21? To celebrate the Mayan way, you might need several clay balls.

That's one way the Maya cooked their food, according to U.S. archaeologists who have unearthed dozens of rounded clay pieces from a site in Mexico.

Conducted with financial support from the Instituto Nacional de Antropología e Historia (INAH) and Millsaps College, the excavation of a kitchen at Escalera al Cielo in Yucatán revealed 77 complete balls and 912 smaller fragments.

About 1-2 inches in diameter and more than 1,000 years old, the clay balls contained microscopic pieces of maize, beans, squash and other root crops.

The finding supports the hypothesis that the balls "were involved in kitchen activities related to food processing," archaeologists Stephanie Simms, Francesco Berna, of Boston University, MA, and George Bey of Millsaps College, MS, wrote in the Journal of Archaeological Science.

"This is the first time fired clay balls have been studied in the Maya area and, to my knowledge, no one has documented the use of clay balls in modern Maya cooking," Simms told Discovery News.

Located in the Puuc Maya hills of Yucatán, Escalera al Cielo was an elite residential settlement that was rapidly abandoned sometime near the end of the Terminal Classic period (800-950 A.D.), as shown by ceramic vessels, stone tools, personal adornments and other material assembled on the floors.

"We know much about the nature of ancient Maya kings and queens, but this type of study helps see how the Maya worked in the kitchen, what kinds of tools they used and the ways they might have prepared their cuisine," Bey, the project co-director along with archaeologist Tomás Gallareta Negrón and anthropologist William Ringle, told Discovery News.

To better understand the meaning of the fired clay balls, the researchers used a suite of microscopic techniques and experimental replication. The tests revealed that the balls were produced from local clay in a standardized set of sizes.

"They were fired at a fairly low temperature and were used repeatedly in the kitchen," Bey said.

Most likely, the fired clay balls were either placed directly into pots of food to cook or heat it, or used in pit (pib in Mayan) oven cooking installations.

"This cooking method involves digging a shallow pit, lining it with stones or clay balls, building a fire on top and waiting until it is reduced to embers," Simms said.

The process continued by placing whole roots, squash fruits or packets of food wrapped in maize on the hot stones. Everything was then covered with earth and leaves to seal in heat. Cooking took from one hour up to a day or more.

The experimental tests showed "how the ancient Puuc Maya manipulated materials available to them to produce objects that potentially represent a staple of every Puuc Maya kitchen inventory, maybe even representing a local cooking technique and cuisine," Simms said.

Fired clay balls have been described from a variety of archaeological contexts worldwide, particularly in the Lower Mississippi River Basin and southeastern United States, and in areas of southwest Asia where clay is abundant but stone is not. Similar clay balls were also unearthed in the neolithic village of Catalhoyuk in Turkey, where they were found in hearths and interpreted as cooking or heating implements.

Charles Kolb, an anthropologist, archaeologist and senior program officer in the Division of Preservation and Access at the National Endowment for the Humanities, Washington DC, agrees that Bey and colleagues "have provided logical inferences of artifact use."

"The fired clay balls show multiple heating episodes rather than just one firing. A single firing might suggest the use of these balls as 'sling stones' or offensive weaponry, but their size would connote other uses," Kolb told Discovery News.

"The multiple firings of these balls points to uses in culinary activities with these fired clay balls substituting for stones," he added.

Lorenzi, Rossella. 2012. “Mayans Cooked Food With Clay Balls”. Discovery News. Posted: November 29, 2012. Available online:

Saturday, December 15, 2012

Genome of the Black Death Reveals Evidence for an Antique Bubonic Plague Pandemic

In a comparison of more than 300 contemporary strains of Yersinia pestis, the bacterium that causes bubonic plague, with ancient bacterial DNA isolated from victims of the Black Death (1347–1351), a team led by researchers at the University of Tuebingen obtained evidence suggestive of a bubonic plague outbreak in the late antique period (8th to 10th centuries AD). The study published online November 30 in PLoS ONE raises strong suspicion that the plague of Justinian, a massive pandemic that is thought to be in part responsible for the collapse of the East Roman Empire, may have been caused by the same bacterium implicated in the Black Death.

After the initial reconstruction of the complete medieval genome of Y. pestis from a Black Death cemetery in London last year, the researchers from the University of Tuebingen used a published genome-wide dataset from more than 300 modern Y. pestis strains to reconstruct the relationship of ancient and modern plague bacteria. Due to the well-established age of the ancient remains they were able to date major radiation events in the history of this pathogen that are likely linked to major pandemics in the human population.

The comparison of modern and ancient genomes revealed that of the 311 Y. pestis strains analyzed, 275 trace their ancestry back to the medieval Black Death pandemic in the middle of the 14th century, confirming a previous analysis of 21 complete plague genomes by the same authors in 2011. In the new larger dataset, however, the authors identified an additional cluster of 11 contemporary bacterial strains that branch in the Y. pestis phylogeny between the 7th and 10th centuries, thus suggesting a radiation event of Y. pestis bacteria during a major outbreak. This time period roughly coincides with the Justinian plague, which historical sources suggest took place between the 6th and 8th centuries AD.

Historians have long suspected that the plague of Justinian was a pandemic of bubonic plague, but until now little empirical evidence existed. The suggestion that this pandemic was likely also caused by bubonic plague was rather unexpected for the researchers, as their previous analysis published in 2011 revealed no evidence for major outbreaks of bubonic plague before the Black Death. "Our new analysis implies that bubonic plague may have been a major killer already in the late Roman Empire," explains Johannes Krause, a Juniorprofessor at the University of Tuebingen specializing in palaeogenetics. "The plague of Justinian seems like the best candidate for this earlier pandemic."

Science Daily. 2012. “Genome of the Black Death Reveals Evidence for an Antique Bubonic Plague Pandemic”. Science Daily. Posted: November 29, 2012. Available online:

Friday, December 14, 2012

Body language, not facial expressions, broadcasts what's happening to us

If you think you can tell by examining someone's facial expressions whether he has just hit the jackpot in the lottery or lost everything in the stock market -- think again. Researchers at the Hebrew University of Jerusalem and at New York University and Princeton University have discovered that -- despite what leading theoretical models and conventional wisdom might indicate -- it just doesn't work that way.

Rather, they found that body language provides a better cue in trying to judge whether an observed subject has undergone strong positive or negative experiences.

In a study published this week in the journal Science, the researchers present data showing that viewers in test groups were baffled when shown photographs of people who were undergoing real-life, highly intense positive and negative experiences. When the viewers were asked to judge the emotional valences of the faces they were shown (that is, the positivity or negativity of the faces), their guesses fell within the realm of chance.

The study was led by Dr. Hillel Aviezer of the Psychology Department of the Hebrew University, together with Dr. Yaacov Trope of New York University and Dr. Alexander Todorov of Princeton University.

In setting out to test the perception of highly intense faces, the researchers presented test groups with photos of dozens of highly intense facial expressions in a variety of real-life emotional situations. For example, in one study they compared emotional expressions of professional tennis players winning or losing a point. These pictures are ideal because the stakes in such games are extremely high from an economic and prestige perspective.

To pinpoint how people recognize such images, Aviezer and his colleagues showed different versions of the pictures to three groups of participants: 1) the full picture with the face and body; 2) the body with the face removed; and 3) the face with the body removed. Remarkably, participants could easily tell apart the losers from winners when they rated the full picture or the body alone, but they were at chance level when rating the face alone.

Ironically, the participants who viewed the full image (face and body) were convinced that it was the face that revealed the emotional impact, not the body. The authors named this effect "illusory valence," reflecting the fact that participants said they saw clear valence (that is, either positive or negative emotion) in what was objectively a non-diagnostic face.

In an additional study, Aviezer and his collaborators asked viewers to examine a broader range of real-life intense faces. These included intense positive situations, such as joy (seeing one's house after a lavish makeover), pleasure (experiencing an orgasm), and victory (winning a critical tennis point), as well as negative situations, such as grief (reacting at a funeral), pain (undergoing a nipple/navel piercing), and defeat (losing a critical tennis point).

Again, viewers were unable to tell apart the faces occurring in positive vs. negative situations. To further demonstrate how ambiguous these intense faces are, the researchers "planted" faces on bodies expressing positive or negative emotion. Sure enough, the emotional valence of the same face on different bodies was determined by the body, flipping from positive to negative depending on the body with which they appeared.

"These results show that when emotions become extremely intense, the difference between positive and negative facial expression blurs," says Aviezer. "The findings challenge classic behavioral models in neuroscience, social psychology and economics, in which the distinct poles of positive and negative valence do not converge."

Aviezer adds: "From a practical-clinical perspective, the results may help researchers understand how body/face expressions interact during emotional situations. For example, individuals with autism may fail to recognize facial expressions, but perhaps if trained to process important body cues, their performance may significantly improve."

EurekAlert. 2012. “Body language, not facial expressions, broadcasts what's happening to us”. EurekAlert. Posted: November 29, 2012. Available online:

Thursday, December 13, 2012


Call it a card player's dream. A complete set of 52 silver playing cards gilded in gold and dating back 400 years has been discovered.

Created in Germany around 1616, the cards were engraved by a man named Michael Frömmer, who created at least one other set of silver cards. According to a story, backed up by a 19th-century brass plate, the cards were at one point owned by a Portuguese princess who fled the country, cards in hand, after Napoleon's armies invaded in 1807.

When the cards were created in 1616, no standardized deck existed; different parts of Europe had their own card styles. This particular set uses a suit system seen in Italy, with swords, coins, batons and cups in values from ace to 10. Each of these suits has three face cards — king, knight (also known as cavalier) and knave. There are no jokers.

In 2010, the playing cards were first put on auction by an anonymous family at Christie's auction house in New York. Purchased by entrepreneur Selim Zilkha, the cards were recently described by Timothy Schroder, a historian with expertise in gold and silver decorative arts, in his book "Renaissance and Baroque Silver, Mounted Porcelain and Ruby Glass from the Zilkha Collection" (Paul Holberton Publishing, 2012).

"Silver cards were exceptional," Schroder writes. "They were not made for playing with but as works of art for the collector's cabinet, or Kunstkammer." Today, few survive. "[O]nly five sets of silver cards are known today and of these only one — the Zilkha set — is complete."

On the cards, two of the kings are depicted wearing ancient Roman clothing, while one is depicted as a Holy Roman Emperor and another is dressed as a sultan, with clothing seen in the Middle East. The knights and knaves are depicted in different poses wearing (then-contemporary) Renaissance military or courtly costumes. Each card is about 3.4 inches by 2 inches (8.6 centimeters by 5 centimeters) and blank on the back.

Gilding with mercury

Creating the card set would have been a hazardous job. For the gilding, its designers used mercury, a poisonous substance that can potentially kill.

"You ground up gold into kind of a dust, and you mix it with mercury, and you painted that onto the surface where you wished the gilding to appear," Schroder told LiveScience in an interview. The mercury gets burned off in a kiln, a process "that would leave the gold chemically bonded to the silver."

The process is illegal today, he noted, and even in Renaissance times, it was known to be hazardous. "I don't think they quite understood why it was dangerous, but they did appreciate the dangers of it," Schroder said.

A gift from a princess?

The owner of the 17th-century card set is not known. However, according to a tradition detailed by the anonymous family who sold it, in the early 19th century the cards were in the possession of Infanta Carlota Joaquina, a daughter of a Spanish king, who was married to a prince in Portugal. She fled to Brazil when Napoleon's armies marched into Iberia in 1807, apparently taking the silver cards with her.

After Napoleon forced her brother, Ferdinand VII, to abdicate the throne of Spain, she made several attempts to take over the Spanish crown and control the country's holdings in the New World. According to the family tradition, she gave the card set to the wife of Felipe Contucci, a man who helped in her efforts.

While this story cannot be proven, Schroder said he has "very little reason to doubt it." He added that "when the cards were acquired by Mr. Zilkha, they came in an early 19th-century leather box which had a brass plate in them, which also appeared to date from the early or middle of the 19th century, with this provenance engraved on it."

Contucci's plot

Spain still controlled a vast empire in the New World at the time of Napoleon's invasion. Among its territories was the viceroyalty of the Rio de la Plata, a large swath of land centred in Buenos Aires (in modern-day Argentina).

In November 1808, Contucci was in contact with leaders in Buenos Aires, according to a conference paper presented last February by Anthony McFarlane, a professor at the University of Warwick. Contucci told the princess that these leaders had made her an offer that would see her gain control of a new kingdom in South America.

McFarlane writes that "Contucci raised her hopes by informing [her] in mid-November 1808 that 124 leading men were ready to support a military intervention by a military force led by the Infante Pedro Carlos [a relative of the princess] and supported by Admiral Smith [of Britain], to install her (as) the constitutional monarch of an independent kingdom."

However, this plan was foiled when government officials from Portugal, Spain and Britain all objected to it.

Then, in August 1809, the Spanish ambassador arrived in Rio with instructions from the Junta Central (the Spanish government not controlled by Napoleon), "to prevent Carlota from entering Spanish territory and to deflect her ambitions to become Regent," writes McFarlane.

Carlota's dream of becoming a ruling queen was simply not in the cards.

Jarus, Owen. 2012. “400-Year-Old Playing Cards Reveal Royal Secret”. Live Science. Posted: November 29, 2012. Available online:

Wednesday, December 12, 2012

Anthropological expertise facilitates multicultural women's health care

Collaboration between medical and anthropological expertise can solve complex clinical problems in today's multicultural women's healthcare, shows Pauline Binder, a medical anthropologist, who will present her thesis on 1 December at the Faculty of Medicine, Uppsala University, Sweden.

Pauline Binder has applied in-depth medical anthropological research approaches to understand clinical problems in ways not possible using statistics alone. The question of why pregnant Somali women have an increased risk of complications even after migration has been the starting point for her fieldwork. She has explored why misunderstandings might occur in the maternity care encounter, which could lead to Somali women declining important obstetric interventions, such as emergency caesarean section.

"Maternity caregivers appear to perceive this decision-making as a culture-bound phenomenon and not as something that can directly affect women's health. Culture is seen as a private matter, and therefore does not encourage the development of treatment programs even if declining treatment can be harmful to both mother and baby," says Binder, medical anthropologist, and PhD candidate at the Department of Women and Child Health, Faculty of Medicine.

Her studies show that the Somali women's fears appear to stem from previous experiences in their country of origin, where caesarean section is associated with life-threatening complications. Maternal death is a reality for many immigrant women in European countries, which can encourage a conceptualization of preventive risk that is rational, yet differs from Western standards.

Clinicians may use a language interpreter without recognition of women's private socio-cultural experiences, which can inhibit open dialog during the care encounter. They may also presume that Somali women only wish to meet female staff. The resulting misconceptions can lead to frustration among caregivers, and ultimately to a lack of trust and communication during the mutual care encounter. To avoid misunderstandings of this type – given the increased emphasis for clinicians to spend more time with clients during the medical consultation – it is essential to promote a consultation arena with two experts in the room: the woman and the doctor/midwife.

"My studies show that Somali women have as a first priority a need for competent and safe care, just as the majority of all pregnant women. Optimal interpreter use is a key ingredient," she says.

Binder also shows that Somali parents' childbearing roles have changed after migration. Interviews with Somali fathers indicate a welcome engagement during their wives' pregnancy health checkups and supportive care – in a way that was unthinkable in Somalia. Childbearing decision-making is now shared, including the mutual decision to abandon traditions such as circumcision of daughters. This example suggests that deeply rooted traditions can change after migration.

The thesis shows that the influence of cultural traditions and social norms, both among maternity caregivers and care-seeking women, on complex clinical problems can be better understood thanks to collaboration between medical and anthropological expertise. The work was conducted in collaboration with Associate Professor Birgitta Essén, Uppsala University, and Professor Sara Johnsdotter, Malmö University.

EurekAlert. 2012. “Anthropological expertise facilitates multicultural women's health care”. EurekAlert. Posted: November 28, 2012. Available online:

Journal Reference:

Binder P. The Maternal Effect Migration: Exploring maternal healthcare in Diaspora using qualitative proxies for medical anthropology. Uppsala: Acta Universitatis Upsaliensis, 2012.

Tuesday, December 11, 2012

Catalan: a language that has survived against the odds

Repressed over the centuries by conquering powers, Catalan is now spoken by 9 million people

Catalan is not, as some believe, a dialect of Spanish, but a language that developed independently out of the vulgar Latin spoken by the Romans who colonised the Tarragona area. It is spoken by 9 million people in Catalonia, Valencia, the Balearic Isles, Andorra and the town of Alghero in Sardinia.

Variants of Catalan are spoken in Valencia and the Balearics, which were taken back from the Moors in the 13th century. According to Professor Albert Rossich of the University of Girona (Gerona), these variants reflect the origin of the people who repopulated these areas when the Moors were driven out. Valencia was repopulated with people from Lleida and Tortosa; the Balearics with people from Barcelona and l'Empordà in the north.

Catalonia had been an autonomous province within the kingdom of Aragón, but when Aragón was united with Castile through the marriage of Ferdinand and Isabella, Castilian – ie Spanish – became the language of court and literature, while Catalan remained the popular tongue. When Barcelona fell in 1714 to Spanish troops led by the Duke of Berwick, Catalonia lost its autonomy, the central government imposed restrictions on the use of Catalan and Spanish became the official language.

It wasn't until the 19th century and the rise of the nationalist cultural movement known as the renaixença that Catalan was revived as a literary language, Rossich says. However, this revival was short-lived. The fascist regime that emerged triumphant from the civil war in 1939 did everything in its power to stamp out the official and private use of Catalan. Harsh penalties were imposed for speaking it.

The arrival of hundreds of thousands of immigrants from Spain's impoverished south further consolidated the use of Spanish as the lingua franca of Catalonia. Most of these immigrants, or their children at least, have come to understand and/or speak Catalan since democracy was restored in 1978. However, large-scale immigration from Latin America over the past 10 years means just over half the Catalan population claim Spanish as their mother tongue.

Since the early 1980s, the imposition of a system known as "immersion," with Catalan as the only vehicular language in state schools, has guaranteed that everyone educated in the past 30 years has a command of it. However, thanks to the presence of Spanish in daily life and the media, virtually all Catalans are perfectly bilingual.

Burgen, Stephen. 2012. “Catalan: a language that has survived against the odds”. The Guardian. Posted: November 22, 2012. Available online:

Monday, December 10, 2012

Mexican silver made it into English coins

Chemical tests of currency help reveal where New World riches flowed

Chemical studies of old English coins are helping unravel a centuries-old mystery: What happened to all the silver that Spaniards dug out of the New World?

Silver from Mexican mines started being incorporated into English coins around the mid-1550s, a new study shows. But silver from the legendary Potosí mines, in what is now Bolivia, didn’t show up until nearly a century later, researchers report online November 6 in Geology.

The new study adds hard data to theories linking the transatlantic influx of silver to price inflation across Europe from about 1515 to 1650.

Minerals such as gold and silver contain a chemical fingerprint of where they were born, for instance in the composition of copper and lead that appear along with the more precious metals. Anne-Marie Desaulty and Francis Albarède of the Ecole Normale Supérieure in Lyon, France, analyzed 15 English coins, dated between 1317 and 1640, for variations in their copper, lead and silver.

Lead in all the coins before the reign of Mary I, which began in 1553, showed that the ore was at least 220 million years old, suggesting it came from ancient rocks in either central Europe or England. Lead in later coins showed a much higher contribution of silver younger than 50 million years old — suggesting it came instead from the mines of Mexico.

The coins show very little hint of Potosí silver, which has a distinctly different lead signature than Mexican ore. That’s surprising, Desaulty says, because the Potosí mines were churning out far more silver at the time than Mexico was.

Geography may explain this, she says: It was easier to ship Mexican silver eastward to Europe than to get Potosí silver across the breadth of Brazil. Instead, Potosí silver went west, from Lima to Acapulco and onward to markets in China.

Scholars have long known of this westward trade route, which probably didn't become really important until the early 17th century, says John Munro, an economist at the University of Toronto.

It’s not yet clear whether these particular English coins reflect a larger trend across Europe. In work published last year, Desaulty analyzed Spanish coins and found that they contained very little silver from either Mexico or Potosí until the 18th century. But Maria Filomena Guerra, of the Centre de Recherche et de Restauration des Musées de France in Paris, has used a different technique to analyze chemical elements that appear in trace amounts in Spanish, French and Italian coins. She found Potosí silver reaching Spain around 1570, and France and Italy in 1575.

“Spain receives the silver directly from Potosí,” Guerra says, “so it is evident it must reach Spain before the other countries.”

Witze, Alexandra. 2012. “Mexican silver made it into English coins”. Science News. Posted: November 20, 2012. Available online:


A.-M. Desaulty and F. Albarède. Copper, lead and silver isotopes solve a major economic conundrum of Tudor and early Stuart Europe. Geology. Published online November 6, 2012. doi:10.1130/G33555.1.
A.-M. Desaulty et al. Isotopic Ag-Cu-Pb record of silver circulation through 16th-18th century Spain. Proceedings of the National Academy of Sciences. Vol. 108, May 31, 2011, p. 9002. doi:10.1073/pnas.1018210108.
M.F. Guerra. The mines of Potosí: a silver Eldorado for the European economy. In Ion beam study of art and archaeological objects. European Commission, 2000.

Sunday, December 9, 2012

Scottish dig unearths '10,000-year-old home' at Echline

The remains of what is believed to be one of Scotland's earliest homes have been uncovered during construction works for the new Forth crossing.

The site dates from the Mesolithic period, about 10,000 years ago.

Archaeological excavation works have been taking place in a field at Echline in South Queensferry in preparation for the Forth Replacement Crossing.

A large oval pit nearly 7m in length is all that remains of the dwelling, along with hearths, flint and arrowheads.

'First settlers'

Rod McCullagh, a senior archaeologist at Historic Scotland, said: "This discovery and, especially, the information from the laboratory analyses adds valuable information to our understanding of a small but growing list of buildings erected by Scotland's first settlers after the last glaciation, 10,000 years ago.

"The radiocarbon dates that have been taken from this site show it to be the oldest of its type found in Scotland which adds to its significance."

The remains feature a number of postholes which would have held wooden posts to support the walls and roof, probably covered with turf.

Several internal fireplace hearths were also identified and more than 1,000 flint artefacts were found, including materials which would have been used as tools and arrowheads.

Other discoveries included large quantities of charred hazelnut shells, suggesting they were an important source of food for the occupants of the house.

Archaeologists believe the dwelling would have been occupied on a seasonal basis, probably during the winter months, rather than all year round.

Ed Bailey, project manager for Headland Archaeology, the company that carried out the excavation works, said: "The discovery of this previously unknown and rare type of site has provided us with a unique opportunity to further develop our understanding of how early prehistoric people lived along the Forth.

"Specialist analysis of archaeological and palaeoenvironmental evidence recovered in the field is ongoing. This will allow us to put the pieces together and build a detailed picture of Mesolithic lifestyle."

Transport Minister Keith Brown said: "This ancient dwelling, which was unearthed as part of the routine investigations undertaken prior to construction works, is an important and exciting discovery.

"We now have vital records of the findings which we will be able to share to help inform our understanding of a period in Scotland's ancient history."

BBC News. 2012. “Scottish dig unearths '10,000-year-old home' at Echline”. BBC News. Posted: November 18, 2012. Available online:

Saturday, December 8, 2012

Mercury poisoning ruled out as cause of Tycho Brahe's death

In 2010, Tycho Brahe was exhumed from his grave in Prague, an event which received extensive international media coverage. Since then, a Danish-Czech team of researchers has been working to elucidate the cause of Tycho Brahe's death. The results of this intensive work now make it possible to rule out mercury poisoning as a cause of death.

For over four hundred years, Tycho Brahe's untimely death has been a mystery. He died on 24 October 1601 only eleven days after the onset of a sudden illness. Over the centuries, a variety of myths and theories about his death have arisen.

One of the most persistent theories has been that he died of mercury poisoning, either because he voluntarily ingested large quantities of mercury for medicinal purposes, or because mercury was used to poison him.

Rumours of death by poisoning arose shortly after Tycho Brahe's death. Brahe's famous assistant Johannes Kepler has been identified as a possible murder suspect, and other candidates have been singled out for suspicion throughout the years, according to Dr Jens Vellev, an archaeologist at Aarhus University in Denmark who is heading the research project.

The mercury poisoning theory has received apparent corroboration from repeated tests of the well-preserved remains of Tycho Brahe's beard which were removed from the grave when his body was exhumed for the first time in 1901.

'To definitively prove or disprove these much debated theories, we took samples from Tycho Brahe's beard, bones and teeth when we exhumed his remains in 2010. While our analyses of his teeth are not yet complete, the scientific analyses of Tycho Brahe's bones and beard are,' explains Dr Vellev.

Normal concentrations of mercury

The levels of mercury in Tycho Brahe's beard were investigated by Dr Kaare Lund Rasmussen, associate professor of chemistry at the University of Southern Denmark and Dr Jan Kučera, professor of nuclear chemistry at the Nuclear Physics Institute in Prague.

'We measured the concentration of mercury using three different quantitative chemical methods in our labs in Odense and Řež, and all tests revealed the same result: that mercury concentrations were not sufficiently high to have caused his death,' says Dr Rasmussen.

'In fact, chemical analyses of the bones indicate that Tycho Brahe was not exposed to an abnormally high mercury load in the last five to ten years of his life,' continues Dr Rasmussen, who analysed the bone samples using cold vapour atomic absorption spectroscopy at the University of Southern Denmark.

'Analyses of hairs from the beard were performed using radiochemical neutron activation analysis and proton microprobe scanning in Řež. They reflect the mercury load in the last approximately eight weeks of Tycho Brahe's life, and these analyses show that mercury concentrations fell from the high end of the normal level eight weeks before death to the low end of the normal level in the last two weeks before death,' explains Dr Kučera.

The "silver nose" that wasn't

In addition to his beard, another central element of the Tycho Brahe myth has been subjected to quantitative analysis: his famous artificial nose. Tycho Brahe lost part of his nose in a duel in 1566. According to tradition, the prosthetic nose he wore for the rest of his life was made of silver and gold.

When Tycho Brahe's grave was opened for the first time in 1901, his nose prosthesis was not found, but there were greenish stains around the nasal region - traces left by the prosthesis.

'When we exhumed the body in 2010, we took a small bone sample from the nose so that we could examine its chemical composition. Surprisingly, our analyses revealed that the prosthesis was not made of precious metals, as was previously supposed. The green colouration turned out to contain traces of equal parts copper and zinc, which indicates that the prosthesis was made of brass. So Tycho Brahe's famous "silver nose" wasn't made of silver after all,' explains Dr Vellev.

The reconstruction of Tycho Brahe's face

Researchers also took advantage of the opportunity to perform a CT scan of Tycho Brahe's skeleton while they had access to his remains in 2010. The research team hopes to be able to reconstruct Tycho Brahe's face on the basis of the scan and their analyses.

Upcoming TV programme on the death of Tycho Brahe

A team of film-makers from the Danish Broadcasting Corporation (DR) has followed the entire project closely, from Jens Vellev's fight to win permission for the exhumation from the authorities in Prague to the analysis of Tycho Brahe's remains and the publication of research results. The documentary Mysteriet om Tycho Brahes død (The Mysterious Death of Tycho Brahe) will be broadcast by DR on Sunday 18 November at 7 pm. The film is a DR production in collaboration with Swedish and Czech television with support from Nordvision. American and German TV channels have already expressed interest in the documentary.

EurekAlert. 2012. “Mercury poisoning ruled out as cause of Tycho Brahe's death”. EurekAlert. Posted: November 15, 2012. Available online:

Friday, December 7, 2012

Four Family Cultures of America Identified

Four types of family cultures -- the Faithful, the Engaged Progressives, the Detached and the American Dreamers -- are molding the next generation of Americans, a three-year study by the University of Virginia's Institute for Advanced Studies in Culture finds.

The project findings are being released November 15 at a national conference in Washington, D.C.

Each type represents a complex configuration of moral beliefs, values and dispositions -- often implicit and rarely articulated in daily life -- largely independent of basic demographic factors, such as race, ethnicity and social class, the "Culture of American Families" study reports.

Most parenting research of the past 30 years, which undergirds notions of "tiger mothers" and "helicopter parents," has been based in psychology and focused on parenting styles, said project co-director James Davison Hunter, LaBrosse-Levinson Distinguished Professor of Religion, Culture and Social Theory and executive director of the institute.

This study, funded by an $850,000 grant from the John Templeton Foundation, goes beyond parenting styles "to tell the complex story of parents' habits, dispositions, hopes, fears, assumptions and expectations for their children," Hunter said.

"Though largely invisible, these family cultures are powerful, constituting the worlds that children are raised in, and may well be more consequential than parenting styles," he said.

The report is based on data collected in two stages from September 2011 through March 2012, explained project co-director Carl Desportes Bowman, director of survey research at the institute.

First, a nationally representative sample of 3,000 parents of school-aged children completed an online one-hour survey. Then follow-up, in-person interviews were conducted with 101 of the survey respondents. The 90-minute interviews complemented the survey with open-ended questions designed to elicit parents' implicit and explicit strategies and assumptions.

The many factors that make up family cultures were distilled using the statistical technique of data cluster analysis to reveal four different family culture types:

The Faithful

The Faithful (20 percent of American parents) adhere to a divine and timeless morality, handed down through Christianity, Judaism or Islam, giving them a strong sense of right and wrong. Understanding human nature as "basically sinful" and seeing moral decline in the larger society, including in the public schools, the Faithful seek to defend and multiply the traditional social and moral order by creating it within their homes and instilling it in their children, with support from their church community. Raising "children whose lives reflect God's purpose" is a more important parenting goal than their children's eventual happiness or career success.

Engaged Progressives

For Engaged Progressives (21 percent of parents), morality centers around personal freedom and responsibility. Having sidelined God as morality's author, Engaged Progressives see few moral absolutes beyond the Golden Rule. They value honesty, are skeptical about religion and are often guided morally by their own personal experience or what "feels right" to them. Politically liberal and the least religious of all family types, they are generally optimistic about today's culture and their children's prospects. Aiming to train their children to be "responsible choosers," Engaged Progressives strategically allow their children freedom at younger ages than other parents. By age 14, their children have complete information about birth control, by 15 they are surfing the Web without adult supervision, and by age 16 they are watching R-rated movies.

The Detached

The parenting strategy of The Detached (21 percent of parents) can be summarized as: Let kids be kids and let the cards fall where they may. The Detached are primarily white parents with blue-collar jobs, no college degree and lower household income. Pessimistic about the future and their children's opportunities, they report lower levels of marital happiness, and do not feel particularly close to their children. They feel they are in a "losing battle with all the other influences out there" and it shows in their practices. They spend less than two hours a day interacting with their children, they do not routinely monitor their children's homework, and they report lower grades for their children. When they do have dinner together as a family it is often in front of the TV.

American Dreamers

American Dreamers (27 percent of parents) are defined by their optimism about their children's abilities and opportunities. These parents, with relatively low household income and education, pour themselves into raising their children and providing them every possible material and social advantage. They also invest much effort protecting them from negative social influences and shaping their children's moral character. This is the most common family culture among blacks and Hispanics, with each group making up about a quarter of American Dreamers. American Dreamers describe their relationships with their children as "very close" and express a strong desire to be "best friends" with their children once they are grown.
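The study does not publish its clustering procedure, but the cluster analysis it mentions can be illustrated in miniature. The sketch below runs a basic k-means algorithm on synthetic two-dimensional data, grouping 200 hypothetical "respondents" into four clusters; the data, dimensions and function names are all invented for illustration and are not drawn from the study.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for survey responses: each "parent" is a short vector
# of numeric answers (here just 2 dimensions), drawn around a group center.
def make_group(center, n=50, spread=0.5):
    return [tuple(c + random.gauss(0, spread) for c in center) for _ in range(n)]

data = (make_group((0, 0)) + make_group((5, 0)) +
        make_group((0, 5)) + make_group((5, 5)))

def kmeans(points, k, iters=20):
    # Initialize centroids with k randomly chosen points.
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster happens to be empty).
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(data, k=4)
print(sorted(len(cl) for cl in clusters))
```

Real survey clustering would involve many more dimensions (beliefs, values, practices) and careful choice of the number of clusters, but the mechanism of repeatedly assigning respondents to types and re-estimating each type's profile is the same.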

The study also identified a number of major trends in parenting and family culture. Contrary to much popular discussion of "the death of character," American parents of all stripes want their children to become loving, honest and responsible adults of high moral character.

Despite a widespread perception among parents that American family life has declined since they were growing up, parents report that their own families and children are doing very well. Unlike many parents in the 1960s who faced a "generation gap," today's parents believe their children largely share their values. Most family arguments and strife center around mundane, day-to-day issues like doing chores.

Many parents are less confident in authoritarian forms of discipline, so they turn to constant communication and close relationships to influence their children. Parents walk the fine line of wanting to be strict, but also wanting to be close friends and confidants of their children.

While parents worry about all sorts of challenges to their children's development and vitality, they are unlikely to identify their own children as struggling with such challenges, including obesity, below-average academic performance, drugs or alcohol, or other risky behaviors. This "not my kids" reality gap may be linked to parental closeness and identification with their children.

Most parents are effectively "going it alone," reporting a very thin support network. Many parents feel helpless to keep negative external influences at bay as children gain ever-increasing exposure and access to the Internet, on-demand movies, Facebook and other technologies.

Science Daily. 2012. “Four Family Cultures of America Identified”. Science Daily. Posted: November 15, 2012. Available online: