Israeli scientists have found evidence that early humans thought ahead, storing fat- and marrow-laden animal bones for rainy days.
According to a new study published in Science Advances, the early humans who populated Qesem Cave near Tel Aviv, Israel, between 200,000 and 420,000 years ago anticipated their future needs through dietary planning. The paper is making headlines because early humans had not previously been thought capable of such dietary foresight.
The researchers identified cut marks on most of the surfaces of the animal bones recovered from Qesem Cave and found them consistent with what the paper calls “preservation and delayed consumption.” It would appear early humans carefully removed the skins from dried bones that had been stored for longer periods; of the sample of more than 80,000 animal bones, the researchers noted cut marks on 78% of the specimens analyzed.
Examples of cut marks associated with disarticulation and/or skinning on deer metapodials from Amudian and Yabrudian levels of Qesem Cave. (Image: Ruth Blasco / Science Advances)
Reinterpreting Kill Site Activities
The study shows that inhabitants of the cave selected body parts from the carcasses of hunted animals such as fallow deer: limbs and skulls were taken to the cave, while the rest of each carcass was stripped of meat and fat and abandoned at the hunting scene, according to Professor Jordi Rosell from the Catalan Institute of Human Paleoecology and Social Evolution (IPHES), as reported in the Independent.
Professor Ran Barkai from Tel Aviv University in Israel says bone marrow is highly nutritious and as such long featured in the prehistoric diet, but until now all evidence had pointed towards “immediate consumption” of marrow at the kill site. The deer leg bones, however, were found to have distinctive chopping marks on their shafts, which are not produced when fresh skin is stripped to fracture bones for the extraction of marrow, he said.
Skinning in combination with tendon removal requires a specific use of the tool with an inclination almost parallel to the bone. (Image: Maite Arilla / Science Advances )
Delays In Eating
In the paper the researchers say that hunter-gatherer food storage was a “risk-reducing mechanism” offsetting resource scarcity and it is typically seen as evidence of “intensified subsistence activities”. Having recreated the cave’s environmental conditions the researchers determined that any stored bone marrow would have remained nutritious for up to nine weeks after the animals had been killed.
The storing of grease and marrow for delayed consumption has been documented among ethnographic groups such as the Nunamiut Eskimo communities, where bones are stored over the winter and processed in large batches. The Loucheux people also process bones secondarily, after a slight delay, and both groups store the fat inside caribou stomachs, which they claim keeps the food edible for two or three years.
Breaking into the bones to reach the marrow. (Image: Maite Arilla / Science Advances )
Rancid To Order?
In 2017, Dr. J. D. Speth published a paper examining the consumption of putrid meat and fish in the Eurasian Middle and Upper Paleolithic, arguing that Arctic and subarctic peoples’ consumption of fermented and deliberately rotted meat was a “desirable and nutritionally important component of human diets,” not merely a starvation food. While it is clear that ancient “processed” foods had dietary benefits, the paper says the fats must have tasted and smelled “rancid.”
However, before we jump back to the old paradigm that early humans were dumb and just ate whatever was in front of them, whether fresh or crawling with maggots, the researchers say they found it difficult to know whether rancidity impaired the consumption of aged marrow. This might sound peculiar to many readers who frequent Western restaurants, but it will make perfect sense to anyone living in Iceland, for example.
Some traditional Icelandic food (left plate: Hangikjöt, Hrútspungar, Lifrarpylsa, Blóðmör, Hákarl, Svið; right plate: Rúgbrauð, Flatbrauð) (CC BY-SA 3.0)
“The Worst, Most Disgusting And Terrible Tasting Thing”
Hákarl, a national dish of Iceland, is Greenland shark cured with a traditional fermentation process and hung up to dry for four to five months; it is served as part of þorramatur, a selection of traditional Icelandic foods offered at the midwinter festival þorrablót. Only after a strong ammonia smell develops, enhancing the fishy taste, is it cut down and served.
In 2008 I ventured to Iceland and was presented with Hákarl in a business setting in which I couldn’t make excuses and avoid eating it. I took a cube in my almost-shaking hand and immediately thought that it smelled of cleaning products; I gagged at the high ammonia content. After the locals laughed, they advised me to pinch my nose, and I did so, but even then I have to agree with the late television chef Anthony Bourdain, who told the Wall Street Journal that fermented shark was “the single worst, most disgusting and terrible tasting thing” he had ever eaten.
But the point is, Icelanders love rancid fish, and so too might early humans living in Qesem Cave near Tel Aviv have loved a mouthful of heaving, rotting deer bone.
The Evolution of Diet
Some experts say modern humans should eat from a Stone Age menu. What's on it may surprise you.
Fundamental Feasts For some cultures, eating off the land is, and always has been, a way of life.
It’s suppertime in the Amazon of lowland Bolivia, and Ana Cuata Maito is stirring a porridge of plantains and sweet manioc over a fire smoldering on the dirt floor of her thatched hut, listening for the voice of her husband as he returns from the forest with his scrawny hunting dog.
With an infant girl nursing at her breast and a seven-year-old boy tugging at her sleeve, she looks spent when she tells me that she hopes her husband, Deonicio Nate, will bring home meat tonight. “The children are sad when there is no meat,” Maito says through an interpreter, as she swats away mosquitoes.
Nate left before dawn on this day in January with his rifle and machete to get an early start on the two-hour trek to the old-growth forest. There he silently scanned the canopy for brown capuchin monkeys and raccoonlike coatis, while his dog sniffed the ground for the scent of piglike peccaries or reddish brown capybaras. If he was lucky, Nate would spot one of the biggest packets of meat in the forest—tapirs, with long, prehensile snouts that rummage for buds and shoots among the damp ferns.
This evening, however, Nate emerges from the forest with no meat. At 39, he’s an energetic guy who doesn’t seem easily defeated—when he isn’t hunting or fishing or weaving palm fronds into roof panels, he’s in the woods carving a new canoe from a log. But when he finally sits down to eat his porridge from a metal bowl, he complains that it’s hard to get enough meat for his family: two wives (not uncommon in the tribe) and 12 children. Loggers are scaring away the animals. He can’t fish on the river because a storm washed away his canoe.
The story is similar for each of the families I visit in Anachere, a community of about 90 members of the ancient Tsimane Indian tribe. It’s the rainy season, when it’s hardest to hunt or fish. More than 15,000 Tsimane live in about a hundred villages along two rivers in the Amazon Basin near the main market town of San Borja, 225 miles from La Paz. But Anachere is a two-day trip from San Borja by motorized dugout canoe, so the Tsimane living there still get most of their food from the forest, the river, or their gardens.
I’m traveling with Asher Rosinger, a doctoral candidate who’s part of a team, co-led by biological anthropologist William Leonard of Northwestern University, studying the Tsimane to document what a rain forest diet looks like. They’re particularly interested in how the Indians’ health changes as they move away from their traditional diet and active lifestyle and begin trading forest goods for sugar, salt, rice, oil, and increasingly, dried meat and canned sardines. This is not a purely academic inquiry. What anthropologists are learning about the diets of indigenous peoples like the Tsimane could inform what the rest of us should eat.
Rosinger introduces me to a villager named José Mayer Cunay, 78, who, with his son Felipe Mayer Lero, 39, has planted a lush garden by the river over the past 30 years. José leads us down a trail past trees laden with golden papayas and mangoes, clusters of green plantains, and orbs of grapefruit that dangle from branches like earrings. Vibrant red “lobster claw” heliconia flowers and wild ginger grow like weeds among stalks of corn and sugarcane. “José’s family has more fruit than anyone,” says Rosinger.
Yet in the family’s open-air shelter Felipe’s wife, Catalina, is preparing the same bland porridge as other households. When I ask if the food in the garden can tide them over when there’s little meat, Felipe shakes his head. “It’s not enough to live on,” he says. “I need to hunt and fish. My body doesn’t want to eat just these plants.”
The Tsimane of Bolivia get most of their food from the river, the forest, or fields and gardens carved out of the forest.
As we look to 2050, when we’ll need to feed two billion more people, the question of which diet is best has taken on new urgency. The foods we choose to eat in the coming decades will have dramatic ramifications for the planet. Simply put, a diet that revolves around meat and dairy, a way of eating that’s on the rise throughout the developing world, will take a greater toll on the world’s resources than one that revolves around unrefined grains, nuts, fruits, and vegetables.
Until agriculture was developed around 10,000 years ago, all humans got their food by hunting, gathering, and fishing. As farming emerged, nomadic hunter-gatherers gradually were pushed off prime farmland, and eventually they became limited to the forests of the Amazon, the arid grasslands of Africa, the remote islands of Southeast Asia, and the tundra of the Arctic. Today only a few scattered tribes of hunter-gatherers remain on the planet.
That’s why scientists are intensifying efforts to learn what they can about an ancient diet and way of life before they disappear. “Hunter-gatherers are not living fossils,” says Alyssa Crittenden, a nutritional anthropologist at the University of Nevada, Las Vegas, who studies the diet of Tanzania’s Hadza people, some of the last true hunter-gatherers. “That being said, we have a small handful of foraging populations that remain on the planet. We are running out of time. If we want to glean any information on what a nomadic, foraging lifestyle looks like, we need to capture their diet now.”
So far studies of foragers like the Tsimane, Arctic Inuit, and Hadza have found that these peoples traditionally didn’t develop high blood pressure, atherosclerosis, or cardiovascular disease. “A lot of people believe there is a discordance between what we eat today and what our ancestors evolved to eat,” says paleoanthropologist Peter Ungar of the University of Arkansas. The notion that we’re trapped in Stone Age bodies in a fast-food world is driving the current craze for Paleolithic diets. The popularity of these so-called caveman or Stone Age diets is based on the idea that modern humans evolved to eat the way hunter-gatherers did during the Paleolithic, the period from about 2.6 million years ago to the start of the agricultural revolution, and that our genes haven’t had enough time to adapt to farmed foods.
A Stone Age diet “is the one and only diet that ideally fits our genetic makeup,” writes Loren Cordain, an evolutionary nutritionist at Colorado State University, in his book The Paleo Diet: Lose Weight and Get Healthy by Eating the Foods You Were Designed to Eat. After studying the diets of living hunter-gatherers and concluding that 73 percent of these societies derived more than half their calories from meat, Cordain came up with his own Paleo prescription: Eat plenty of lean meat and fish but not dairy products, beans, or cereal grains, foods introduced into our diet after the invention of cooking and agriculture. Paleo-diet advocates like Cordain say that if we stick to the foods our hunter-gatherer ancestors once ate, we can avoid the diseases of civilization, such as heart disease, high blood pressure, diabetes, cancer, even acne.
That sounds appealing. But is it true that we all evolved to eat a meat-centric diet? Both paleontologists studying the fossils of our ancestors and anthropologists documenting the diets of indigenous people today say the picture is a bit more complicated. The popular embrace of a Paleo diet, Ungar and others point out, is based on a stew of misconceptions.
The Hadza of Tanzania are the world’s last full-time hunter-gatherers. They live on what they find: game, honey, and plants, including tubers, berries, and baobab fruit.
Meat has played a starring role in the evolution of the human diet. Raymond Dart, who in 1924 discovered the first fossil of a human ancestor in Africa, popularized the image of our early ancestors hunting meat to survive on the African savanna. Writing in the 1950s, he described those humans as “carnivorous creatures, that seized living quarries by violence, battered them to death … slaking their ravenous thirst with the hot blood of victims and greedily devouring livid writhing flesh.”
Eating meat is thought by some scientists to have been crucial to the evolution of our ancestors’ larger brains about two million years ago. By starting to eat calorie-dense meat and marrow instead of the low-quality plant diet of apes, our direct ancestor, Homo erectus, took in enough extra energy at each meal to help fuel a bigger brain. Digesting a higher quality diet and less bulky plant fiber would have allowed these humans to have much smaller guts. The energy freed up as a result of smaller guts could be used by the greedy brain, according to Leslie Aiello, who first proposed the idea with paleoanthropologist Peter Wheeler. The brain requires 20 percent of a human’s energy when resting; by comparison, an ape’s brain requires only 8 percent. This means that from the time of H. erectus, the human body has depended on a diet of energy-dense food, especially meat.
Fast-forward a couple of million years to when the human diet took another major turn with the invention of agriculture. The domestication of grains such as sorghum, barley, wheat, corn, and rice created a plentiful and predictable food supply, allowing farmers’ wives to bear babies in rapid succession, one every 2.5 years instead of one every 3.5 years for hunter-gatherers. A population explosion followed; before long, farmers outnumbered foragers.
Over the past decade anthropologists have struggled to answer key questions about this transition. Was agriculture a clear step forward for human health? Or in leaving behind our hunter-gatherer ways to grow crops and raise livestock, did we give up a healthier diet and stronger bodies in exchange for food security?
When biological anthropologist Clark Spencer Larsen of Ohio State University describes the dawn of agriculture, it’s a grim picture. As the earliest farmers became dependent on crops, their diets became far less nutritionally diverse than hunter-gatherers’ diets. Eating the same domesticated grain every day gave early farmers cavities and periodontal disease rarely found in hunter-gatherers, says Larsen. When farmers began domesticating animals, those cattle, sheep, and goats became sources of milk and meat but also of parasites and new infectious diseases. Farmers suffered from iron deficiency and developmental delays, and they shrank in stature.
Despite boosting population numbers, the lifestyle and diet of farmers were clearly not as healthy as the lifestyle and diet of hunter-gatherers. That farmers produced more babies, Larsen says, is simply evidence that “you don’t have to be disease free to have children.”
The Inuit of Greenland survived for generations eating almost nothing but meat in a landscape too harsh for most plants. Today markets offer more variety, but a taste for meat persists.
The real Paleolithic diet, though, wasn’t all meat and marrow. It’s true that hunter-gatherers around the world crave meat more than any other food and usually get around 30 percent of their annual calories from animals. But most also endure lean times when they eat less than a handful of meat each week. New studies suggest that more than a reliance on meat in ancient human diets fueled the brain’s expansion.
Year-round observations confirm that hunter-gatherers often have dismal success as hunters. The Hadza and !Kung Bushmen of Africa, for example, fail to get meat more than half the time when they venture forth with bows and arrows. This suggests it was even harder for our ancestors who didn’t have these weapons. “Everybody thinks you wander out into the savanna and there are antelopes everywhere, just waiting for you to bonk them on the head,” says paleoanthropologist Alison Brooks of George Washington University, an expert on the Dobe !Kung of Botswana. No one eats meat all that often, except in the Arctic, where Inuit and other groups traditionally got as much as 99 percent of their calories from seals, narwhals, and fish.
So how do hunter-gatherers get energy when there’s no meat? It turns out that “man the hunter” is backed up by “woman the forager,” who, with some help from children, provides more calories during difficult times. When meat, fruit, or honey is scarce, foragers depend on “fallback foods,” says Brooks. The Hadza get almost 70 percent of their calories from plants. The !Kung traditionally rely on tubers and mongongo nuts, the Aka and Baka Pygmies of the Congo River Basin on yams, the Tsimane and Yanomami Indians of the Amazon on plantains and manioc, the Australian Aboriginals on nut grass and water chestnuts.
“There’s been a consistent story about hunting defining us and that meat made us human,” says Amanda Henry, a paleobiologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig. “Frankly, I think that misses half of the story. They want meat, sure. But what they actually live on is plant foods.” What’s more, she found starch granules from plants on fossil teeth and stone tools, which suggests humans may have been eating grains, as well as tubers, for at least 100,000 years—long enough to have evolved the ability to tolerate them.
The notion that we stopped evolving in the Paleolithic period simply isn’t true. Our teeth, jaws, and faces have gotten smaller, and our DNA has changed since the invention of agriculture. “Are humans still evolving? Yes!” says geneticist Sarah Tishkoff of the University of Pennsylvania.
One striking piece of evidence is lactose tolerance. All humans digest mother’s milk as infants, but until cattle began being domesticated 10,000 years ago, weaned children no longer needed to digest milk. As a result, they stopped making the enzyme lactase, which breaks down the lactose into simple sugars. After humans began herding cattle, it became tremendously advantageous to digest milk, and lactose tolerance evolved independently among cattle herders in Europe, the Middle East, and Africa. Groups not dependent on cattle, such as the Chinese and Thai, the Pima Indians of the American Southwest, and the Bantu of West Africa, remain lactose intolerant.
Humans also vary in their ability to extract sugars from starchy foods as they chew them, depending on how many copies of a certain gene they inherit. Populations that traditionally ate more starchy foods, such as the Hadza, have more copies of the gene than the Yakut meat-eaters of Siberia, and their saliva helps break down starches before the food reaches their stomachs.
These examples suggest a twist on “You are what you eat.” More accurately, you are what your ancestors ate. There is tremendous variation in what foods humans can thrive on, depending on genetic inheritance. Traditional diets today include the vegetarian regimen of India’s Jains, the meat-intensive fare of Inuit, and the fish-heavy diet of Malaysia’s Bajau people. The Nochmani of the Nicobar Islands off the coast of India get by on protein from insects. “What makes us human is our ability to find a meal in virtually any environment,” says the Tsimane study co-leader Leonard.
Studies suggest that indigenous groups get into trouble when they abandon their traditional diets and active lifestyles for Western living. Diabetes was virtually unknown, for instance, among the Maya of Central America until the 1950s. As they’ve switched to a Western diet high in sugars, the rate of diabetes has skyrocketed. Siberian nomads such as the Evenk reindeer herders and the Yakut ate diets heavy in meat, yet they had almost no heart disease until after the fall of the Soviet Union, when many settled in towns and began eating market foods. Today about half the Yakut living in villages are overweight, and almost a third have hypertension, says Leonard. And Tsimane people who eat market foods are more prone to diabetes than those who still rely on hunting and gathering.
For those of us whose ancestors were adapted to plant-based diets, and who have desk jobs, it might be best not to eat as much meat as the Yakut. Recent studies confirm older findings that although humans have eaten red meat for two million years, heavy consumption increases atherosclerosis and cancer in most populations, and the culprit isn’t just saturated fat or cholesterol. Our gut bacteria digest a nutrient in meat called L-carnitine. In one mouse study, digestion of L-carnitine boosted artery-clogging plaque. Research also has shown that the human immune system attacks a sugar in red meat called Neu5Gc, causing inflammation that’s low level in the young but that eventually could cause cancer. “Red meat is great, if you want to live to 45,” says Ajit Varki of the University of California, San Diego, lead author of the Neu5Gc study.
Many paleoanthropologists say that although advocates of the modern Paleolithic diet urge us to stay away from unhealthy processed foods, the diet’s heavy focus on meat doesn’t replicate the diversity of foods that our ancestors ate—or take into account the active lifestyles that protected them from heart disease and diabetes. “What bothers a lot of paleoanthropologists is that we actually didn’t have just one caveman diet,” says Leslie Aiello, president of the Wenner-Gren Foundation for Anthropological Research in New York City. “The human diet goes back at least two million years. We had a lot of cavemen out there.”
In other words, there is no one ideal human diet. Aiello and Leonard say the real hallmark of being human isn’t our taste for meat but our ability to adapt to many habitats, and to combine many different foods to create many healthy diets. Unfortunately the modern Western diet does not appear to be one of them.
The Bajau of Malaysia fish and dive for almost everything they eat. Some live in houses on the beach or on stilts; others have no homes but their boats.
The latest clue as to why our modern diet may be making us sick comes from Harvard primatologist Richard Wrangham, who argues that the biggest revolution in the human diet came not when we started to eat meat but when we learned to cook. Our human ancestors who began cooking sometime between 1.8 million and 400,000 years ago probably had more children who thrived, Wrangham says. Pounding and heating food “predigests” it, so our guts spend less energy breaking it down, absorb more than if the food were raw, and thus extract more fuel for our brains. “Cooking produces soft, energy-rich foods,” says Wrangham. Today we can’t survive on raw, unprocessed food alone, he says. We have evolved to depend upon cooked food.
To test his ideas, Wrangham and his students fed raw and cooked food to rats and mice. When I visited Wrangham’s lab at Harvard, his then graduate student, Rachel Carmody, opened the door of a small refrigerator to show me plastic bags filled with meat and sweet potatoes, some raw and some cooked. Mice raised on cooked foods gained 15 to 40 percent more weight than mice raised only on raw food.
If Wrangham is right, cooking not only gave early humans the energy they needed to build bigger brains but also helped them get more calories from food so that they could gain weight. In the modern context the flip side of his hypothesis is that we may be victims of our own success. We have gotten so good at processing foods that for the first time in human evolution, many humans are getting more calories than they burn in a day. “Rough breads have given way to Twinkies, apples to apple juice,” he writes. “We need to become more aware of the calorie-raising consequences of a highly processed diet.”
It’s this shift to processed foods, taking place all over the world, that’s contributing to a rising epidemic of obesity and related diseases. If most of the world ate more local fruits and vegetables, a little meat, fish, and some whole grains (as in the highly touted Mediterranean diet), and exercised an hour a day, that would be good news for our health, and for the planet.
The Kyrgyz of the Pamir Mountains in northern Afghanistan live at a high altitude where no crops grow. Survival depends on the animals that they milk, butcher, and barter.
On my last afternoon visiting the Tsimane in Anachere, one of Deonicio Nate’s daughters, Albania, 13, tells us that her father and half-brother Alberto, 16, are back from hunting and that they’ve got something. We follow her to the cooking hut and smell the animals before we see them—three raccoonlike coatis have been laid across the fire, fur and all. As the fire singes the coatis’ striped pelts, Albania and her sister, Emiliana, 12, scrape off fur until the animals’ flesh is bare. Then they take the carcasses to a stream to clean and prepare them for roasting.
Nate’s wives are cleaning two armadillos as well, preparing to cook them in a stew with shredded plantains. Nate sits by the fire, describing a good day’s hunt. First he shot the armadillos as they napped by a stream. Then his dog spotted a pack of coatis and chased them, killing two as the rest darted up a tree. Alberto fired his shotgun but missed. He fired again and hit a coati. Three coatis and two armadillos were enough, so father and son packed up and headed home.
As family members enjoy the feast, I watch their little boy, Alfonso, who had been sick all week. He is dancing around the fire, happily chewing on a cooked piece of coati tail. Nate looks pleased. Tonight in Anachere, far from the diet debates, there is meat, and that is good.
The people of Crete, the largest of the Greek islands, eat a rich variety of foods drawn from their groves and farms and the sea. They lived on a so-called Mediterranean diet long before it became a fad.
Ann Gibbons is the author of The First Human: The Race to Discover Our Earliest Ancestors. Matthieu Paley photographed Afghanistan’s Kyrgyz for our February 2013 issue.
The magazine thanks The Rockefeller Foundation and members of the National Geographic Society for their generous support of this series of articles.
Why eating like we did 20,000 years ago may be the way of the future
When it comes to our eating habits, it's clear that we're doing it wrong. We may be in the midst of a health crisis, but there are few practical solutions for dealing with it.
But now a growing chorus of people are claiming that modern and processed foods are to blame, insisting that we should instead take an "evolutionary approach" to our diets and turn to foods that were eaten by our Paleolithic ancestors. Critics have responded by proclaiming it a misguided step in the wrong direction. Either way, Paleo eating has become a major lifestyle.
There's no question that something's terribly wrong with the way we eat. Nearly one in three Americans is overweight or obese, and rates of diabetes continue to rise. These conditions, along with steady rates of heart disease, cancer, and inflammatory problems, have led some to predict that the generation now growing up will be the first in our history to have shorter lifespans than their parents.
Part of the problem is that virtually everything we thought we knew about eating is wrong; the current health crisis is in no small part caused by widespread and pervasive food confusion, much of it driven and reinforced by the modern food industry. As counterintuitive as it might seem, we now know that saturated fats are good and that salt has been unfairly vilified. It's becoming apparent that whole grains are extremely unhealthy, and that sugar is far, far worse than we previously thought, a conclusion that has led some experts to essentially describe it as poison.
At the same time, grocery stores are filled with fat-free and fat-reduced products, and the obesity problem persists. Fad diets have virtually no staying power, much to the delight of those offering them. We have become a fat-starved people who have substituted in its place high-density carbohydrates like bread, white potatoes, rice, and other sugar-infused foods.
But like so many things in life, there often comes a time for corrections, and diet is no exception. To address the situation, a growing number of people are proclaiming that modern foods are to blame, or more specifically, those foods that came about as the result of the Agricultural Revolution and, more recently, the larger food industry. The answer to many of our health problems, they suggest, is to look at our evolutionary history and see what it has to say about what our bodies were actually meant to eat.
An evolutionary approach to eating
It's been said that nothing in biology is worth knowing outside of the context of evolutionary biology. Human nutrition is no exception.
The human genome has remained relatively unchanged for the past 120,000 years, a lengthy span of time during which our Paleolithic hunter-gatherer ancestors primarily ate meat, with some vegetables, fruits, nuts, and seeds. Evolution ensured that humans were well adapted to eat those types of foods, and their bodies were happy to receive them.
It's only been in the last 10,000 years, however, that humans have started to engage in agriculture, a technological and sociological development that has resulted in increased reliance on grains, legumes, and dairy — what are now Neolithic staples. Trouble is, our bodies haven't the foggiest idea what to do with these foods, and in some cases, they're actually toxic.
Shockingly, it's been over these past 10,000 years that humans have become significantly shorter, fatter, less muscular, and more prone to disease. It's this realization that has led some thinkers like Jared Diamond to proclaim that agriculture was the worst mistake our species has ever made. While it's been great for society as a whole, from a health perspective it's proven catastrophic for individuals.
Consequently, a new approach to eating has emerged called the Paleolithic Diet, or simply "Paleo" for short. Advocates of this diet focus on eating unprocessed foods like lean meat, seafood, roots, tubers, fruits, and vegetables. Not only are these foods comprehensible to the human digestive system, they pack much more nutrition per calorie than typical Neolithic and processed foods.
In terms of what not to eat, followers of the Paleo diet refrain from eating grains, legumes, and dairy — each of which contains toxic elements that our bodies have never had a chance to adapt to. These foods fatten our physiques and shorten our lives. Paleo advocates claim that by avoiding these foods, and eating more along the lines of how our ancestors ate, we can stave off such problems as obesity, diabetes, cancer, and cardiac disease.
The Paleo Pushers
One of the leaders of the Paleo movement is biochemist Matt Lalonde . He believes that the "eating like a caveman" approach is helpful, but incomplete. It's not enough to just eat in an apparently evolutionary-friendly way. Rather, we need to do actual science and determine optimal eating habits. It just turns out that his findings tend to support the central assumption made by Paleo advocates.
Science also brushes shoulders with Paleo at the Ancestral Health Symposiums that are held once a year. Last year's confab featured over 50 speakers representing a diverse cross-section of disciplines. Titles of presentations included, "The Trouble with Fructose: A Darwinian Perspective", "Heart Disease and Molecular Degeneration", and "What Foods Make My Brain Work Best?" The symposiums have featured heavy hitters in the health sciences, including Loren Cordain, Gary Taubes, and Robb Wolf. These symposiums demonstrate how a niche group of scientists, medical practitioners, and health experts are paving the way for what is likely to become a health and wellness paradigm for the future.
The problem with Neolithic food
One of the many refrains of the Paleo movement is that Neolithic foods cause a number of health problems. Take whole grains for example. Paleo advocates believe that a primary reason for our poor health relative to our Paleolithic ancestors is the introduction of a protein called gluten, which is found in many staple grains.
That's because gluten may, in fact, be a poison. Many plants have evolved chemical defenses to dissuade animals from eating them. Think of it as a kind of chemical warfare, with gluten being a particularly nasty weapon. It turns out that people suffering from celiac disease, an autoimmune disorder, may not be the only ones sensitive to gluten. In fact, Paleo advocates contend that all humans react poorly to it.
According to Robb Wolf, author of The Paleo Solution: The Original Human Diet , it's for this reason that everyone should avoid gluten, which tends to be delivered by consuming whole grains. Other compounds that cause similar reactions include lectins, phytates, and protease inhibitors. Together, these compounds limit protein and mineral absorption while inflicting severe inflammatory responses. Wolf compares this effect to having poison oak lining our intestinal walls.
These compounds irritate the digestive tract and, by consequence, the rest of the body. And as we're increasingly learning, inflammation is a contributor to a number of health problems, including impairments to the immune system and the body's ability to recover.
Wolf also suggests we stay away from legumes, dairy, sugar, and processed vegetable oils. These have the same gut-irritating and inflammation-promoting properties. As a result, Paleo devotees tend to refrain from cheese, milk, soy products, and peanuts (which, technically speaking, are legumes). They also avoid all processed foods, which tend to be laden with added chemicals and preservatives.
At the same time, Paleo devotees laud the benefits of fat. Not only does it taste good and have the ability to stave off depression, it's essential. Robb Wolf is a big promoter of increasing our total fat intake, suggesting that we get half of our calories from fat.
Indeed, the hysteria against fat is starting to wane. A recent meta-analysis of 21 studies published in the American Journal of Clinical Nutrition found no significant association between saturated fat intake and an increased risk of coronary heart disease, stroke, or cardiovascular disease. Fat, advocates argue, does the mind and body good.
An indelible component of Paleo is eating meat. It's valued highly for its protein, fats, and essential nutrients. But not all meat is the same. Folks on the Paleo diet place considerable emphasis on eating grass-fed organic meats as opposed to grain-fed stock raised in factory farms. It's been shown that grain-fed beef contributes to a skewed omega-3 to omega-6 ratio and that it impairs our ability to absorb nutrients. Factory farms also infuse their livestock with hormones and other questionable chemical concoctions. Paleo dieters are increasingly turning away from these foods, preferring instead to eat "clean" meat.
It's for this reason that dairy tends to be vilified in Paleo circles. Most milk comes from grain-fed cattle and causes the same inflammatory problems as the meat. Cows, it would seem, react just as badly to the gluten in grains as humans do.
What about the ethics?
Critics of the Paleo diet complain that its meat-centric approach is completely out of line with other trends, namely the shift to vegetarian diets. Paleo eaters are often seen by vegetarians and vegans as a selfish group looking to take society backwards instead of forward.
Ancestral health enthusiasts, on the other hand, claim that they are in fact charting a course to the future by overturning conventional models of meat production. Paleo eaters tend to avoid factory-farmed foods, instead buying organic and free-range meats and eggs from farmers' markets and organic food stores. They are also advocates of sustainable, humane farming practices and the promotion of healthy, untainted foods. An animal that got to live its life on a farm grazing in pasture, they argue, is a far cry from the crammed and deplorable conditions found in most factory farms. It's for this reason that they claim to be "conscious carnivores" and purveyors of a more sustainable future.
Quality, rather than quantity, is a central tenet of the Paleo diet. It's for this reason that people on Paleo are content to spend two to three times more on their food than they would at regular supermarkets.
Clearly, Paleo meat-eaters and vegetarians are going to forever disagree on the ethics of the matter. Fundamentally, vegetarians argue that it's never right to raise, kill, and eat another animal, whereas Paleo folk contend that personal health takes precedence, and that it's normal and natural for us to eat meat; it's what we evolved to do.
Dietary habits are an incredibly personal thing. Few subjects raise more controversy and heated opinions than food politics. Ultimately, however, when it comes down to making dietary choices, it tends to be about what works best for the individual - whether on account of health, environmental, or ethical considerations.
Essentially, people need to ask themselves about how their food choices make them feel about themselves as moral agents, and how those choices impact on their personal health and well-being.
[This post has been edited slightly to underscore the fact that the Paleo diet is based on speculation informed by science, rather than scientific evidence. - Ed.]
This post reads more like an editorial than most that I have seen on io9. Reviewing the embedded links, four go to people already involved in this movement, two are to pop-sci publications, two are to the NY Times and a grand total of one is to a meta-analysis in a peer reviewed journal. That sole peer reviewed paper was not written on the paleo diet, just on the safety of fats in diet. Controlled studies have been done on other diets and published in high quality, peer reviewed journals like NEJM and JAMA, have any been done on the paleo diet?
Mr Dvorsky has failed to allow any of the other side of the story to appear here and I'm sure somebody must disagree with this.
My experience with fad diets of all kinds (including Atkins and South Beach which also tell people to be less concerned about fat intake) is that they work mostly because they force people to pay attention to what they eat and not just snack on whatever is handy all day long. That's an oversimplification, but that one little fact accounts for most of the benefit achieved with any strict diet.
Scottish Highlanders Traditional Diet
Part 1 of a 3 part series on the nutrition of the Scottish Highlanders
The plow of the farmer, tilling the soil at the old battlefield of Assaye, in India, hits something solid. The farmer examines the bone his plow has turned up. It is a human thighbone, much larger, stronger, and thicker than usual. The farmer knows he has found the remains of one of the Scottish Highlanders.
Once, the Highlanders of Scotland were noted for their incredible strength, size, health, endurance, vitality, and prowess in battle. Armed only with swords and small shields, they consistently defeated much larger armies of professional soldiers armed with guns and cannon. Finally defeated by overwhelming numbers and superior technology, they were recruited by their conquerors, and won victories for them all over the world. While most of the men were away fighting for the British Empire, their families were eventually driven off their land and out of their country to accommodate the demands of industrial agriculture.
Nutrition and Physical Strength of Scottish Highlanders
What was the secret of the Highlanders’ prowess? Why were they larger, stronger, faster, and able to defeat much larger groups of enemies in hand-to-hand combat? What gave them their incredible endurance, which enabled them to march sixty miles over steep roadless hills and fight a battle—all in one day? Why did they recover from horrible wounds that would have been fatal to most other men?
It could not have just been their hard physical work, because all the peasants of Europe and India did hard physical work. The difference was in their diet. While most of the people of Europe ate a plant-based diet of grains and vegetables, the Highlanders ate mostly animal foods, just like their ancestors did.
The Highlands of Scotland are a high country, full of hills, mountains, streams, and valleys. The soil is not well suited to agriculture, but it provides excellent grazing land.
What did the Scottish Highlanders Eat?
The Scottish Highlanders based their diet, first, on the raw milk of their herds. They kept large herds of small, agile cattle, tiny sheep, and goats. All of these animals produced milk, which was drunk raw and added to porridges, and made into raw cheese and raw butter. The cheese and butter were used at all times, but especially in the harsh, cold winters.
The Scottish Highlanders' diet varied with the seasons. During the spring and summer, wild game of all kinds, including the native red deer, was hunted and eaten. Fresh fish was a vital part of the diet during these seasons, as the many rivers and streams were rich with salmon and many other kinds of wild fish. Beef was not eaten during good weather, which led some travelers to mistakenly conclude that the Scottish Highlanders did not eat beef.
During the fall, many cattle, sheep, and goats were killed, and their meat salted to provide meat during the cold part of the fall and during the long winter. Every part of the animal was used for food, including all the internal organs. The famous Scottish dish known as Haggis, made from innards and oatmeal cooked in the stomach of a sheep, originated in the Highlands. Few vegetables were available (though onions and turnips could be found in season, along with some wild vegetables, such as nettles). The main fruit available were wild berries, in season.
The only grains that could be grown in the Highlands were barley and oats, which were made into breads, porridges, and cakes. Sugar was largely unavailable, though some honey could be found. Grains were usually eaten with raw milk, raw butter, or raw cheese, or all of them. Oats were dried and carried in a pouch in wartime as a survival food.
Raw Milk and the Scottish Highlanders
It should be understood that the Highland cattle were not bred for giving huge amounts of milk, like modern dairy cattle. The amount of milk they produced was dependent on the quality of the plants they grazed on. In a bad year, when a particularly cold winter had damaged the native forage, they produced less milk. At these times, the Highlanders would take some blood from their cattle, and use it for food, often in the form of blood puddings.
This diet produced a group of people who were much stronger, much larger, and much healthier than most other Europeans. Their incredible vitality was not always stopped by age. One Highlander became famous in England when he enlisted in a Highland regiment at the age of seventy, and fought in the French and Indian War, becoming famous for his prowess with the broadsword, when he led small parties of men into the thick brush to hunt down enemy sharpshooters.
This is the first of a three part series on the Scottish Highlanders Traditional Diet. The second part will describe some of the incredible feats the Highland soldiers performed in battle, feats possible only because of their superior strength, agility, and endurance. It will also describe the incredible ability the Highlanders had to heal from the most severe wounds, without medical treatment.
Stanley Fishman is the author of Tender Grassfed Meat. His newest book is Tender Grassfed Barbecue: Traditional, Primal and Paleo. He is a frequent guest blogger on Hartkeisonline.com. See Stanley's other "recipes" for good health on his Guest Blogger page.
This post is part of the Real Food Wednesday blog carnival, please visit Kelly the Kitchen Kop for an amazing array of real food recipes and stories!
Although Mayans primarily relied on non-meat sources for food, they also consumed meat when available.
Three sources of meat existed for the Mayans: they could find meat through hunting, they could exploit the resources available in waters such as fish, or they could domesticate animals and get meat from them.
Mayans exploited all three sources whenever there was a need for meat.
They hunted animals such as deer, monkey, manatee, guinea pig, armadillo, wild pheasant and peccary.
Traps and spears, and later bows and arrows, were commonly used by Mayans during hunting.
Among the animals domesticated by the Mayans for meat were turkey and dog.
From maritime sources, Mayans caught shrimp, lobster, and many varieties of fish.
Along the Yucatan peninsula, Mayans often fished for saltwater species and treated the catch to preserve it over long periods of time, even trading it to other Mayan cities.
Diet and Nutrition
Despite the escalating unemployment rate and growth of breadlines on city streets, malnourishment was not considered a major health concern of the early 1930s. Heart disease, cancer, pneumonia, and infectious diseases were the leading causes of death. Though many were hungry, widespread starvation was not a reality. Deaths from hunger and thirst were fewer than one per 100,000 of the total population.
Diets of ordinary people improved with the new processing and preparation technologies, such as canning and refrigeration, made available through the 1930s. Vegetables were consumed in greater abundance, and improved preservation and storage allowed for consumption longer through the year. In addition, federal food relief programs introduced foods to the South that were foreign to residents there, such as whole-wheat flour, coconuts, and grapefruit juice, as opposed to the customary pork and white flour. Overall, however, the novelty of canned foods led to a decrease in eating fresh foods that offered greater nutrition. Meat consumption during the Depression dropped from 130 to 110 pounds per person per year. Dried beans took their place, with the average American eating almost ten pounds a year, up from six pounds a year in 1920.
World War II
The conservative food usage of the Great Depression turned out to be needed again during World War II, which for the United States began late in 1941.
Huge shipments of food to Europe led to shortages and hoarding in America. Industry could not produce enough of certain products to satisfy the demand both overseas and at home. The U.S. government introduced a system, called rationing, for distributing certain scarce products, such as food, among the U.S. population. Each household was given a certain number of coupons by the government for certain products, such as gas. Sugar rationing came early and caused significant adjustments in homemakers' baking. War cakes again were full of raisins and other dried fruit, just as in Depression days. Beef was scarce by the spring of 1942. Just as in Depression times, rural people hunted game, and others depended on chickens, cheese, and eggs. By 1943 butter and canned goods were also rationed.
The establishment of a large government system for supplying food to the most needy during the Depression paved the way for yet another government system for rationing certain food items and other commodities. For many who had to cut back considerably during the Depression, war rationing seemed much less of a burden than it might have otherwise.
In 1943 General Mills published a Betty Crocker booklet, Your Share, which showed women, many of whom had been teenagers during the Great Depression, how to prepare appetizing, healthy meals with foods that were available. The booklet included charts to use corn syrup or honey in place of sugar for cakes. Homemakers turned to the familiar casseroles and food extenders—macaroni, potatoes, beans, rice, and dried peas—of the previous decade.
More About… Betty Crocker
In 1924, five years before the stock market crash and the onset of the Great Depression, a program from WCCO radio in Minneapolis, Minnesota, introduced "Betty Crocker's Cooking School of the Air." Though it began as a local radio program, Betty Crocker was soon broadcast nationwide to an audience of homemakers and cooks. It was the first women's service program to be broadcast nationally. "Betty Crocker" was, in fact, Marjorie Child Husted, a home economist with the food company General Mills.
Unbeknownst to her listeners, Betty Crocker was not an actual woman, but simply a name created by advertiser Sam Gale for Washburn Crosby, a milling company that later merged into General Mills. The need for Betty began when, in October 1921, Washburn Crosby's advertising department ran a jigsaw puzzle advertisement in a national magazine offering pincushions resembling a miniature sack of flour to anyone who could put the pieces together. This was a promotional ad campaign for Washburn Crosby's Gold Medal flour. The company received an overwhelming number of answers. Accompanying the puzzle answers were requests for answers to food questions: "Why doesn't my dough rise?" and "How long do you knead?" Sam Gale was put in charge of the responses. Agnes White and Janette Kelley, two of Washburn Crosby's home economists, wrote the responses, but Gale signed the letters. Soon they all agreed a woman should be telling women how to shape their rolls and buns, and Betty Crocker was born.
The name "Betty" was chosen because it was a nice, friendly name. Crocker was chosen in honor of a retired director of the company, William G. Crocker. A piece of paper circulated in the office for women of the company to sign the name; the easiest signature to read was chosen as Betty Crocker's. This fictional persona quickly became the spokesperson for Washburn Crosby's Gold Medal flour. Women wrote letters to Betty and sent her gifts. In the early 1930s, Betty Crocker published a meal-planning booklet that advised women on how to maintain an adequate diet on Depression-era wages and relief foods.
Marjorie Child Husted served as Betty's radio voice for many years. Not until 1936 did Betty have a face to show the public. Neysa McMein, an artist-illustrator, created the face as a composite of her perception of women who worked in test kitchens. From 1936 onward, the public saw portraits of Betty on cookbooks, magazines, and General Mills' products. Millions tuned into Betty's cooking school radio program during the Depression and World War II for her advice on low cost menus that would keep their families well fed. When Betty Crocker's Picture Cookbook appeared in 1951, it became an instant bestseller.
Betty's face underwent six major changes over the years, including one in 1996. The 1996 change was a computerized composite of 76 American women of varying ages and ethnic backgrounds, intended to give Betty Crocker widespread appeal. Betty aged well, as her face retained its 32-year-old youthful appearance of the first portrait in 1936.
Later versions of Betty Crocker's Picture Cookbook were entitled simply Betty Crocker's Cookbook. The many revisions and additions continued to sell well at the beginning of the twenty-first century, when the book's contents also became available on floppy disk. Betty Crocker's Cookbook is acknowledged as America's top-selling cookbook, with an estimated 55 million copies sold by the mid-1990s.
Located in backyards, vacant lots, or adjacent to war plants, 20 million "victory gardens" grew 40 percent of the country's vegetables. In 1942, all across America, gardeners bought Burpee's victory garden packets, which contained fifteen vegetables for one dollar, the Suburban Garden package of 25 varieties also for one dollar, and the Country Garden package of thirty types for three dollars. Even Eleanor Roosevelt had a victory garden on the White House lawn, and Harry Hopkins and his family tended to it. Home canning of vegetables was prevalent in approximately three-fourths of American homes, and families produced an average of 165 jars a year.
With the social dislocations of the war, homemakers in temporary military housing or wives working for the war effort turned to processed and ready-to-eat foods. Many of these foods had been developed in the 1930s during the Great Depression. Large food companies gained strength as they received massive orders from the armed forces. Hormel's Spam, first made popular in the 1930s, was the soldier's staple. General Foods' sales to the government rose from $1,477,000 in 1941 to $37,840,000 in 1944.
The processed food industry continued to grow throughout the twentieth century. By the later 1950s, cooking habits had changed. There was greater demand for convenience in food preparation and consumption. For example, demand for fresh fruits declined in favor of less nutritious canned fruit. Soft drinks and pizza also became popular. Easier-to-prepare packaged foods became common, as did fast food restaurant chains. The processed food industry was concentrated in the large food-producing companies of Beatrice Foods, Borden, Campbell Soup, General Foods, General Mills, Heinz, Kellogg, Nestlé, Kraft, Pillsbury, and other standard brands. New appliances included food processors, blenders, and, in the 1980s, microwave ovens.
The agricultural community of farmers consisted of less than 3 percent of the population as of 1980, down from 25 percent in 1930 and 23 percent in 1940. Most people did not grow their own food, and home gardening was more of a hobby than a supplemental and necessary food source. People instead bought their food in large chain food stores. Frozen and canned foods dominated the shelves, and fast and easy cooking for many meant popping a frozen dinner into the microwave to heat and eat. People did still cook fresh meals, but far less attention was paid to making the most of every food source than to putting together a meal with the greatest ease and convenience.
As the twenty-first century began, the simple cooking of the 1930s persisted among few families. Depression-era recipes and cookbooks gained popularity by the end of the twentieth century, nostalgically harkening back to a different period of food preparation.
One development in the modern kitchen that was far beyond the imaginations of Depression families was the use of genetically modified foods. Though the technology was still young in early 2001, some farmers were planting and growing genetically modified crops. Designed to repel threatening insects and to produce hardier crops, genetically modified foods were not without controversy. Environmental groups in particular renewed their promotion of organic foods, so called because they were grown naturally and without the use of pesticides.
In the decades since the Great Depression, Americans have gone from integrating new developments like Birds Eye frozen foods into their meals to treating many of those same Depression-era food items as a natural and daily part of their lives. Foods that were once scarce, or utilized to the last scrap, were now being examined for genetically modified "improvements." The food industry in America had undoubtedly come a long way since the 1930s, and many of the most far-reaching impacts on the industry arose out of the Depression.
The Federal Surplus Commodities Corporation (FSCC) established the Food Stamp Plan on May 16, 1939, to ensure that surplus agricultural products got to the needy. The FSCC grew out of the New Deal program called the Federal Surplus Relief Corporation (FSRC), created in the fall of 1933 for emergency relief for the hungry. Although the Food Stamp Plan was cancelled during World War II when full employment made it unnecessary, it was reestablished as a pilot program under President John F. Kennedy (served 1961–1963) in 1961. Kennedy directed the Agriculture Department to establish an experimental program based on the original Food Stamp Plan. This experiment later became a full-fledged program under the Food Stamp Act in 1964. By 1971 Congress had established uniform standards of eligibility for food stamps. During the later twentieth century, various changes to regulations and available funding tended to differ under each new administration.
As of 2001 the Food Stamp Program operated under the Agriculture Department's Food and Nutrition Services. The program helped 7.3 million households put food on the table in 2000. Participating individuals used food stamp coupons just as they would cash at most grocery stores. Considered a transitional measure for individuals moving from welfare to work, the food stamp program, which first got its start due to the hard times of the Depression, is a cornerstone of federal food assistance programs.
Federal Food, Drug, and Cosmetic Act of 1938
During the Great Depression, many consumers became convinced that food, drug, and cosmetic businesses were practicing price gouging—charging too much money for products—and were engaged in consumer fraud. Consumer fraud usually took the form of companies claiming their products could perform better or benefit the consumer more than they actually could; the product was basically misrepresented in advertising. The existing Pure Food and Drug Act of 1906 proved ineffective since it did not regulate drug makers' performance claims unless fraud could be proven. Concern also arose as a host of new processed foods came on the market. Their quality and standards of safety were virtually uncontrolled, and the health impact for consumers was unknown. The American Medical Association (AMA) and state and federal drug officials all attempted to strengthen food and drug laws and to expose wrongdoing and misleading claims.
In 1933 Rexford G. Tugwell, assistant secretary of agriculture and advisor to President Franklin Roosevelt, led the drafting of a new food and drug bill. The bill greatly expanded government control over the drug- and food-processing industries. Drug claims contrary to general medical opinion were made illegal, and medicinal ingredients were now required to be clearly disclosed. Food labels were also required to list all ingredients. The government could establish quality and fill-of-container standards, and government officials could inspect factories to ensure the law was obeyed. The food, cosmetic, and drug industries adamantly opposed the bill and successfully lobbied against it.

|Food Stamp Participation, 1970–1995|
|Year|Number of Recipients|Cost|
|1970|4.3 million|$577 million|
Senator Royal Copeland of New York, whose main interest in Congress was food and drug issues, became a major supporter of the bill and continued to fight for its passage. Then, in 1937, 107 people, including many children, died from taking a drug called sulfanilamide, which a small pharmaceutical plant in Bristol, Tennessee, produced. A toxic chemical, diethylene glycol, had been added without the company first checking it for human toxicity. An angered public called for congressional action. Under the leadership of Senator Copeland and Representative Clarence Lea of California, and with the assistance of Walter Campbell, head of the Food and Drug Administration (FDA), the Federal Food, Drug, and Cosmetic Act of 1938 passed through Congress. President Franklin Roosevelt signed it in June 1938 as one of the last major New Deal measures.
The new act greatly expanded consumer protection and increased the minimal penalties for violations set out in the 1906 act. The new provisions extended government oversight and standards control to cosmetics and medical devices, such as the modern-day pacemaker, required new drugs to be shown safe before marketing, and eliminated the requirement to prove intent to defraud in drug mislabeling cases. The law authorized factory inspections and allowed food standards of identity, quality, and fill of containers to be set. For example, a product labeled "fruit jam" must contain 45 parts fruit and 55 parts sugar or sweetener; there may not be excessive pits in canned cherries; and minimum weights of solid food must remain after drainable liquid is poured off of canned foods.
Early Human Diets
By Andrew Ng
The old saying “You are what you eat” takes on new significance in the most comprehensive analysis to date of early human teeth from Africa.
Prior to about 3.5 million years ago, early humans dined almost exclusively on leaves and fruits from trees, shrubs, and herbs—similar to modern-day gorillas and chimpanzees. However, about 3.5 million years ago, early human species like Australopithecus afarensis and Kenyanthropus platyops began to also nosh on grasses, sedges, and succulents—or on animals that ate those plants.
Evidence of this significant dietary expansion is written in the chemical make-up of our ancestors’ teeth. These findings are reported in a series of four papers published this week in the Proceedings of the National Academy of Sciences, by an international group of scientists spread over three continents.
“These papers present the most exhaustive isotope-based studies on early human diets to date,” says the Academy’s own Zeresenay Alemseged, Senior Curator and Chair of Anthropology, and co-author on two of the papers (available here and here). “Because feeding is the most important factor determining an organism’s physiology, behavior, and its interaction with the environment, these findings will give us new insight into the evolutionary mechanisms that shaped our evolution.”
Plants can be divided into three categories based on their method of photosynthesis: C3, C4, and CAM. C3 plants (trees, shrubs, and herbs) can be chemically distinguished from C4/CAM plants (grasses, sedges, and succulents) because the latter incorporate higher amounts of the heavier isotope carbon-13 into their tissues. When the plants are consumed, the isotopes become incorporated into the animal’s own tissues—including the enamel of developing teeth. Even after millions of years, scientists can measure the relative amounts of carbon-13 in teeth enamel and infer the amount of C3 vs. C4/CAM plants in an animal’s diet.
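The logic described above is essentially a two-endmember mixing calculation: an enamel carbon-13 value partway between the "pure C3 diet" and "pure C4/CAM diet" extremes implies a proportionally mixed diet. The sketch below illustrates the idea; the endmember values are rough assumptions typical of the paleodiet literature, not figures taken from the PNAS papers discussed here.

```python
# Illustrative two-endmember mixing model for tooth-enamel carbon isotopes.
# The endmember delta-13C values below are ASSUMED for illustration only.

D13C_C3_ENAMEL = -12.0  # per mil: enamel of an animal eating only C3 plants (assumed)
D13C_C4_ENAMEL = 2.0    # per mil: enamel of an animal eating only C4/CAM plants (assumed)

def c4_fraction(d13c_enamel: float) -> float:
    """Estimate the fraction of C4/CAM-derived carbon in the diet by
    linear interpolation between the two assumed endmembers."""
    frac = (d13c_enamel - D13C_C3_ENAMEL) / (D13C_C4_ENAMEL - D13C_C3_ENAMEL)
    # Clamp to the physically meaningful range [0, 1].
    return min(1.0, max(0.0, frac))

# A hypothetical enamel value of -5.0 per mil falls halfway between
# the endmembers, implying a roughly 50% C4/CAM-derived diet:
print(round(c4_fraction(-5.0), 2))  # 0.5
```

Real studies must also correct for the offset between diet and enamel values and for changes in atmospheric carbon-13 over geological time, which this sketch ignores.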
“What we have is chemical information on what our ancestors ate, which in simpler terms is like a piece of food item stuck between their teeth and preserved for millions of years,” says Alemseged.
These papers represent the first time that scientists have analyzed carbon isotope data from all early human species for which significant samples exist: 175 specimens representing 11 species, ranging from 4.4 to 1.3 million years in age. The results show that prior to 3.5 million years ago, early humans ate almost exclusively C3 plants. But starting about 3.5 million years ago, early humans acquired a taste for C4/CAM plants as well, even though their environments seemed to be broadly similar to their ancestors’. The later genus Homo, including modern-day Homo sapiens, continues the trend of eating a mixture of C3 and C4/CAM plants—in fact, people who enjoy mashed potatoes with corn are practicing a 3.5 million-year-old habit.
What the studies cannot reveal is the exact identity of the food, and whether it also included animals that ate C4/CAM plants (an equally valid way to acquire carbon-13). Possible C4/CAM-derived meals include grass seeds and roots, sedge underground stems, termites, succulents, or even small game and scavenged carcasses. In 2010, Alemseged and his research team published the earliest evidence for meat consumption using tools, dating back to 3.4 million years ago—an additional line of evidence showing a dietary shift in human evolution.
“The change in isotopic signal documented by the new studies, coupled with the evidence for meat-eating in Australopithecus afarensis from Dikika around 3.5 million years ago, suggests an expansion in the dietary adaptation of the species,” says Alemseged.
The authors of this week’s papers also sampled fossils of giraffes, horses, and monkeys from the same environments and saw no significant change in their carbon isotope values over time—suggesting that the unique dietary transformation of early humans did not apply to other mammals on the African savanna. The question of what drove the transformation, however, remains unresolved.
Andrew Ng is Communications Manager at the California Academy of Sciences.
Images: National Museums of Kenya. Photos by Mike Hettwer, Yang Deming
The Real Caveman Diet
Did real cavemen follow the “prehistoric diet”?
Russian scientists claim to have grown a plant from the fruit of a flower that froze in the Arctic 32,000 years ago. That’s about the same time the last Neanderthals roamed the Earth. This particular plant doesn’t produce an edible fruit analogous to an apple or nectarine, but rather a dry capsule that holds its seeds. Did hominids eat fruits and veggies during the Neanderthal era?
They definitely ate fruit. Last year, paleoanthropologists found bits of date stuck in the teeth of a 40,000-year-old Neanderthal. There’s evidence that several of the fruits we enjoy eating today have been around for millennia in much the same form. For example, archaeologists have uncovered evidence of 780,000-year-old figs at a site in Northern Israel, as well as olives, plums, and pears from the paleolithic era. Researchers have also dug up grapes that appear to be 7 million years old in northeastern Tennessee (although, oddly, the grapes are morphologically more similar to today’s Asian varieties than the modern grapes considered native to North America). Apple trees blanketed Kazakhstan 30,000 years ago, oranges were common in China, and wild berries grew in Europe. None of these fruits were identical to the modern varieties, but they would have been perfectly edible.
Vegetables are a different story. Many of the ones we eat today have undergone profound changes at the hands of human farmers. Consider the brassicas: Between 8,000 and 10,000 years ago, humans took a leafy green plant and, by selecting for different characteristics, began to transform it into several different products. Modern kale, cabbage, broccoli, cauliflower, Brussels sprouts, and kohlrabi are all members of the same species, derived from a single prehistoric plant variety. Wild carrots may predate human agriculture, but they’re unpalatable and look nothing like the cultivated variety. The earliest domesticated carrots were probably purple, and the orange carrot emerged in the 17th century. While legumes predate the dawn of man, modern green beans are a human invention.
It’s not altogether clear why fruits have changed less than vegetables, but it might have something to do with their evolutionary purpose. Plants developed sugary fruits millions of years ago so that sweet-toothed mammals would gobble them up and disseminate the seeds. By the time hominids descended from the African tree canopy, delicious fruits were widely available with no need for artificial selection. Since vegetables gain nothing from being eaten, they didn’t experience the same pressure to evolve delectable roots, stems, and leaves.
Just because there are some paleolithic fruits in production today doesn’t mean you can easily mimic the paleolithic diet. Modern apples, dates, figs, and pears aren’t necessarily nutritionally equivalent to their late Stone Age ancestors. Selection by humans has made them larger and sweeter, and may have caused other chemical changes. Ancient man also ate plants that you can’t find at a grocery store, like ferns and cattails. His relative dietary proportions of meats, nuts, fruits, and vegetables are in dispute, and probably varied significantly with location. Some paleoanthropologists also believe hunter-gatherers ate a far wider variety of foods than modern man, each in a smaller quantity, to minimize the risk of poisoning.
Australian Food Timeline
Prior to European settlement, Australia’s indigenous people were primarily nomadic, moving from place to place to hunt and gather food. They had a deep understanding of the land, the seasons and the food sources. It is now recognised that they exerted considerable control over their environment, using methods including fire, taboos, grain harvesting and storage, fish and eel traps and some planting to ensure continuity of food supply.
Aboriginal people may have arrived on the Australian continent as early as 60,000 years ago. At that time, many species of megafauna still existed, and there is some evidence that the arrival of humans caused or contributed to their extinction. Remains of these animals have been found close to aboriginal artifacts, and traces of blood and hair on the tools match megafauna species. The practice of “firestick farming” – the burning of the undergrowth – may also have contributed to the extinctions by reducing the available food for the large herbivores.
The aboriginal calendar defines the seasons according to the changing availability of fish, animal and food-plant resources. When the Europeans arrived, aboriginal food sources included kangaroos, wallabies, bandicoots, possums, lizards, other animals, and birds that were hunted with spears, boomerangs and stone axes. Firestick farming opened up pasture lands and encouraged new growth, attracting game animals and making them easier to capture.
Aboriginal people also collected yams and other plant roots, fruits, vegetables, seeds, leaves and honey. The cultivation of food crops was more widespread in the Torres Strait Islands, where bananas, taro, coconuts and yams were grown. However, there is evidence that, even where crops were not specifically planted, certain conservation measures were practised. For example, enough roots were left in the ground to produce new plants in the future. When eggs were gathered from nests, some were left behind.
In central Australia, witchetty grubs were commonly eaten. These are the larvae of several species of moth, found in the roots of certain shrubs and trees. In the alpine regions of New South Wales, aboriginal people would gather at certain times of the year to feast on Bogong Moths. The moths were ground to a paste between stones.
In other parts of Australia, Indigenous people constructed elaborate fish and eel traps in creeks and rivers. Fish were hunted with spears and nets. Middens along the ocean shorelines show that shellfish and crustaceans were also important aboriginal food sources.
Wild millet was the principal grain. It was a practice to harvest the grass while it was green and pile it in heaps to ripen; the heaps were then threshed to release the seeds. Early food technologies included grinding seeds to produce flour and processing poisonous cycad nuts, either by leaching in running water or by fermentation.
Because Australia’s aboriginal people ate a wide variety of foods, they were not dependent on a single food source. Their nomadic lifestyle was well-adapted to the changing environment, allowing them to move to areas where new foods could be found. (Source: State Library Victoria, Indigenous foods)
Maya Food & Agriculture
For the Maya, reliable food production was so important to their well-being that they closely linked the agricultural cycle to astronomy and religion. Important rituals and ceremonies were held in honour of specialised workers from beekeepers to fishermen, and maize, the all-important Mesoamerican staple, even had its own god. As an agricultural society, some 90% of the Maya population were involved in farming. Management of land and natural resources brought a more dependable harvest and varied diet, enabling economic growth. This allowed for the flourishing of Maya culture, but eventual over-exploitation, an ever-increasing population, and protracted periods of drought may have been factors in the ultimate collapse of the Maya civilization.
The Maize God
One of the most important Maya deities, perhaps even the most important, was the 'Young Maize God'. Typically portrayed with a head in the form of an ear of maize, he could appear in Maya mythology as the creator god. Descending to the underworld, he reappeared with the world tree which holds the centre of the earth and fixes the four cardinal directions. The world tree was, indeed, sometimes visualised as a maize plant. One of the names of the Maya maize god was Yum Caax ('Master of the Fields in Harvest') but another, as at Palenque, was Hun-Nale-Ye ('One Revealed Sprouting'). If any further proof were needed of the Maya reverence for maize, one need only consult the Popol Vuh religious text, where the ancestors of humanity are described as being made of maize. Other important foodstuffs besides maize had their own gods, for example, Ek Chuah (aka God M) was considered the god of cacao and so vital was water to crops that the Maya rain god Chac gained special prominence, especially in times of drought.
Maya Agricultural Methods
The quality and quantity of agricultural land around Maya cities varied depending on their location. In the lowlands of the Peten and Puuc regions, for example, the soil was relatively fertile but restricted to small patches. A technique to increase soil fertility was the use of raised fields, especially near water courses and flood plains. At these locations stone-wall terraces were sometimes built to collect fertile silt deposits. Forests were cleared to make way for agriculture, but such land quickly declined in fertility and necessitated slash-and-burn techniques to rejuvenate the land after two years of crops, which then required on average a further 5-7 years before it was ready for re-planting. A similar necessity to leave fields to rejuvenate was common in the highland sites, where plots had to be left empty for up to 15 years. To maximise productivity, crops were planted together, such as beans and squash in fields of maize, so that the beans could climb the maize stalks and the squash could help reduce soil erosion.
Those cities without access to large areas of land suitable for agriculture could trade with more productive cities. For example, slaves, salt, honey and precious goods such as metals, feathers, and shells were often traded for plant products. Just how larger plots of land were distributed, in what manner farmland passed on between generations, and the level of state management in agricultural production remain unclear. It is known, however, that many Maya private homes would have cultivated food in small gardens, especially vegetables and fruit. Once harvested, foodstuffs were stored in wooden cradles above ground and in subterranean sites.
Water management was another necessity, especially in certain Maya cities during the dry winters and hot summers. Water was collected in sinkholes created by collapsed caves and known as a tz'onot (corrupted to cenote in Spanish) and sometimes brought to fields using canals. Cisterns (chultunob) were also excavated, typically bottle-shaped and built using wide plastered aprons around their entrances to maximise the collection of rainwater.
Maya Crops & Food
Maize (milpa) was one of the most important crops but so too were root crops such as sweet manioc, beans, squash, amaranth, and chile peppers. Maize was typically boiled in water and lime, and eaten as a gruel mixed with chile pepper (saka') for breakfast or made into a dough for baking on a flat-stone (metate) as tortillas or flat cakes (pekwah) and as tamales - stuffed and baked in leaves.
Animals hunted included deer, peccary, turkeys, quails, ducks, curassow, guan, spider monkeys, howler monkeys, the tapir, and armadillo. Dogs were also fattened up on maize and eaten. Fish were caught using nets, traps, and lines, and, as in certain Asian cultures, trained cormorants were used to help catch fish: The cormorants' necks were tied so that they could not swallow the bigger fish, which they would then bring back to the fisherman. Meat and fish were typically cooked in stews along with various vegetables and peppers. Fish was either salted and dried or roasted over an open fire.
Fruits eaten included guava, papaya, avocado, custard apple, and sweetsop. A frothy chocolate drink and honey were also popular desserts. Another very popular drink was pulque beer, known to the Maya as chih and made from fermented agave juice.
Important trees used by the Maya for their wood were the sapodilla and breadnut. The bottle gourd was cultivated to make containers from its hard but light-weight fruit shell. The copal was valued for its resin which was burned as incense and used for rubber. Finally, cotton was also cultivated, especially in the Yucatan province, famous for its fine textiles.