
The Fruit in Juicy Fruit

Jacob M. asks: Does Juicy Fruit gum actually contain any fruit juice?

With a brand recognition rate somewhere close to 99% in the States, it’s pretty safe to say that almost everyone reading this has at least heard of Juicy Fruit gum, if not also chewed it at some point. The question we’re looking at today is: exactly what fruit is Juicy Fruit supposed to taste like, and does it actually contain any dehydrated juice from that fruit?

Now you’d think that answering this question would be as simple as picking up a pack of Juicy Fruit gum and reading the ingredients list, but as with most things in life, we discovered it’s just not that easy. For starters, the ingredients listed on a pack of Juicy Fruit are incredibly vague; the only real piece of useful information you can glean from a pack itself is that the gum contains “Natural and artificial flavors”. Helpful…

This also isn’t helped by the fact that Wrigley, the company that makes Juicy Fruit, is similarly coy about discussing what goes into its product, often refraining from mentioning any specific fruit in regards to its flavour and excusing this evasive behaviour by stating that the flavour is a trade secret.

That said, with a little digging, you’ll find that, in the past, Wrigley has explicitly said that Juicy Fruit contains notes of “lemon, orange, pineapple and banana” in response to emails from curious customers asking for more specific information about Juicy Fruit’s flavour. Again though, this isn’t entirely helpful in discerning whether or not the gum actually contains fruit juice, since (always awesome) science has made it possible to synthesize almost any flavour we want.

Curiously, there is a fruit out there known to taste almost exactly like Juicy Fruit: a lesser-known fruit grown across Asia and Africa called Jackfruit. Jackfruit tastes so much like Juicy Fruit gum that the resemblance is often one of the first things mentioned when the fruit is discussed in Western media, and there is a small but nonetheless dedicated subset of people who believe it is the key secret ingredient in the gum.

However, although Jackfruit tastes like Juicy Fruit gum, this isn’t because Juicy Fruit contains any juice from the Jackfruit (a dead giveaway being that there are no records of Wrigley importing the fruit or its juice). The real reason the two taste and smell so similar is that they both (probably) contain a chemical called isoamyl acetate. We have to say “probably” because, as noted, Wrigley won’t confirm exactly what goes into making Juicy Fruit, which is its right as a company, but experts are still pretty sure that isoamyl acetate has something to do with Juicy Fruit.

One of the most compelling arguments for isoamyl acetate being the primary flavouring agent behind Juicy Fruit is that, like Jackfruit, the chemical is said to smell very much like the gum. Even in literature that doesn’t mention Juicy Fruit by name, isoamyl acetate is described as having an indistinct, almost indescribably “fruity” smell with hints of banana, peach and other similarly sweet fruits, which is pretty much exactly how people who haven’t eaten Jackfruit describe Juicy Fruit.

Making this argument even more tantalising is that, historically, one of the few ways to obtain isoamyl acetate in commercially viable quantities was as a by-product of whiskey production. When Wrigley first began producing Juicy Fruit gum in 1893, it did so from a factory in Illinois, the biggest whiskey-producing state in America at the time, suggesting that, perhaps, Wrigley sourced isoamyl acetate, and hence Juicy Fruit’s unique flavour, from the many whiskey producers nearby.

Perhaps the most damning piece of evidence that Juicy Fruit’s flavour is the result of artificially created chemicals rather than real fruit is that Wrigley itself used to explicitly advertise the “artificial flavor” of the product as a unique selling point up until a few decades ago. You see, packs of Juicy Fruit starting around the 1940s carried the slogan “The Gum With the Fascinating Artificial Flavor”, which the company used as a way of enticing customers to try it.

It is only in recent years, with the trend to avoid artificial chemicals in consumables, that Wrigley has shied away from advertising the fact that Juicy Fruit’s unique flavour is, in all likelihood, the result of artificially created chemicals rather than a cocktail of chemicals directly extracted from fruit.



Why is Ham Traditionally Eaten on Easter?

Rita H. asks: Why do we serve ham on Easter?

Under Jewish dietary laws (called kashrut), eating pork in any form is strictly forbidden. Jesus Christ was Jewish. So why, on the anniversary of his resurrection, do people traditionally serve ham? You’ll often read it’s because ham is supposedly a “Christian” meat, able to be consumed by Christians but not certain other prominent religious groups. However, the real reason is simply because it’s in season.

While modern food storage techniques and supermarkets with efficient and worldwide supply chains shield us from this fact somewhat, like fruits and vegetables, different meats also have seasons, and these depend on a variety of factors including what the animals were eating and when, where they were in their growth cycle, and the availability (or lack) of refrigeration.

With pigs and cows, before refrigeration, it simply made sense to slaughter them in the fall. Since it takes a fair amount of time to butcher a beast as large as a hog or steer, the cold temperatures helped keep the meat from going bad before it could be properly prepared.

Another consideration was taste. Shortly before slaughter in the fall, hogs would be fed things like apples and acorns that would greatly improve the flavor of the meat they would ultimately provide. As one specialty pork producer noted:

The tradition of acorn-fed pork goes back millennia . . . . The oak nut was a diet staple because the pigs roamed and rooted about for acorns in the forests of Italy and Spain. An acorn diet is today best known as what makes the prized and pricey Jamón Ibérico of Spain so succulent.

Butchered in the fall, most hams were prepared and allowed to properly cure over the winter to further develop their flavor. This was a particularly important food source at this time of year in some parts of the world, where the rest of the stored meat would have already been eaten and little other meat of any real quality was available. This was the case in North America, where the other traditional spring meat, lamb, was (and still is) less in vogue, which is also why eating ham on Easter is much more popular in North America than in other regions where Easter is celebrated.

Conversely, in Europe, lamb is commonly served at Easter, a tradition that traces its origins to Jewish Passover feasts. This is also certainly fitting for Easter, with Jesus as the “lamb of God.” Lambs born shortly before the holiday could be slaughtered at just 6 to 8 weeks old, offering a fresh, as opposed to cured, option for Easter protein at a time of year when other such protein sources were historically scarce.


Bonus Facts:

  • Eggs are popular at Easter, at least in part because spring is the peak season for their production. As a result, eggs have been a part of spring celebrations since long before Easter was a thing; for example, decorated eggs have been a part of the Iranian New Year, Nowruz (observed on the spring equinox), for millennia.
  • In addition to the refrigeration factor, like pigs, cows are also thought to be tastier when slaughtered in the fall. This is apparently due to the fact that “the frost has killed flies and sweetened the grass [so the] cows are more comfortable.” Among other things, slaughtering cows when they are fatigued or in distress also negatively impacts the shelf life of the meat.
  • Turkey is also tastiest in the fall because, as the weather cools and the days shorten, their hormone levels shift and they fatten up. On the other hand, since most chicken eggs are laid in the spring, those that are allowed to grow into chickens are traditionally slaughtered in the summer.
  • Easter bunnies have their roots in old German pagan traditions celebrating the goddess Eostra, who was honored for bringing spring and fertility on the spring equinox. Because of their fecundity, rabbits were used as her symbol.
  • If you’re wondering why rabbits are considered such prolific breeders, it has less to do with them getting it on more than many other animals, necessarily, and more to do with the time frames involved in the process of producing new rabbits.  A baby rabbit becomes sexually mature in an average of just about 5-6 months, and sometimes even sooner.  They can potentially live up to around 10 years.  Further, it takes only around a month from the point of getting pregnant for a female rabbit to give birth.  Their litters can include as many as a dozen rabbits!  What makes this even more astounding is that the female rabbit can get pregnant as soon as the next day after giving birth.  Rabbits are induced ovulators, so the females are pretty much ready to get pregnant anytime they mate (assuming they aren’t already pregnant), with the mating triggering the ovulation.  So even just a single female can give birth to several dozen baby rabbits per year.  Given this, combined with the fact that the babies are ready to make babies at the stage when most human offspring are still mostly just poop and drool factories, you can see how rabbits got this reputation.
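Those rabbit numbers are easy to sanity-check. The sketch below is a back-of-the-envelope calculation using only the rough figures quoted above (a one-month gestation, litters of up to a dozen), not actual biological data:

```python
# Rough upper bound on one doe's annual output, using the article's
# figures (assumptions for illustration, not measured biology):
GESTATION_DAYS = 31    # "only around a month" from mating to birth
MAX_LITTER_SIZE = 12   # "as many as a dozen" kits per litter
DAYS_PER_YEAR = 365

# Since a doe can conceive again the day after giving birth, litters
# can follow back to back, limited mainly by gestation time.
litters_per_year = DAYS_PER_YEAR // GESTATION_DAYS
max_kits_per_year = litters_per_year * MAX_LITTER_SIZE

print(litters_per_year)   # 11
print(max_kits_per_year)  # 132
```

Even at half that litter size, a single female easily clears “several dozen” offspring a year.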

Do Wine Makers Really Walk Over Grapes With Their Feet?

Melissa K. asks: Do high end wine makers really have people walk all over grapes with their bare feet?

Perhaps no image is more synonymous with the act of wine making than that of a person smushing grapes with their bare feet to extract the precious juices contained therein (in the grapes, not the inevitably sweaty feet). But did winemakers ever commonly do this?

The answer to this question largely depends on who you ask. Today, certain winemakers, usually ones with some sort of financial interest in the matter, (at least publicly) maintain that grape stomping was an integral part of winemaking history. However, historians tend to think it was a relatively rare practice. To be clear, nobody is saying that ancient people didn’t crush grapes with their feet to extract the juices; rather, it is known that man has had a much more efficient alternative to this method for at least 6,000 years. We know this because in early 2011 archaeologists in Armenia uncovered the remnants of an archaic winery, complete with a wine press dating back to 4000 BC.

Wine itself can be traced back to at least 5400 BC, which suggests that early man must have had a more rudimentary method of crushing grapes before someone invented the wine press; and, indeed, it probably involved the use of feet. This is supported by numerous pieces of artwork and other references from history showing people stomping piles of grapes while standing in giant vats. Perhaps the most prominent pieces come from ancient Egypt, where it’s largely believed that stomping grapes was a common part of winemaking, as evidenced by many artworks depicting exactly that.

However it’s important to note that this was by no means the only step in the juice extraction process. You see, treading grapes is a remarkably inefficient method of extracting juice from them, and up until very recently in history, humans were all about not wasting anything food related (See: A Brief History of French Toast). After stomping grapes, the ancient Egyptians would then put the leftovers into a large sack, at which point: “Poles were tied to the sack’s four corners and by turning them the rest of the grape juice was squeezed out.”

A thing to keep in mind is that pressing grapes is a deceptively difficult task and the amount of pressure you use to squeeze grapes must be closely monitored to avoid accidentally releasing bitter tannins from the seeds, which can, obviously, negatively affect the taste of the final product. With this in mind, pressing grapes by using simple bodyweight seems like a good way to avoid applying too much pressure. However, it’s just too inefficient to be used on a mass scale unless you were earning Egyptian Pharaoh levels of money and had a fleet of slaves or workers to do it for you.

Not surprisingly, in almost every civilisation in which wine presses were used, there is little evidence that grapes were also stomped; winemakers simply didn’t need to, having a much more efficient process in the wine press. An exception can be found in ancient Rome, where grape stomping was commonly used to extract the initial juices from grapes, which the Romans believed to have special properties the rest of the juices did not. Even in this case, the Romans are still noted to have used presses to extract the bulk of the juices after the stomping took place.

As for why treading grapes seems so synonymous with winemaking in general, despite being inefficient, unsanitary, rarely used in history, and time-consuming, that may have a lot to do with the winemaking industry itself playing up the allure of romanticised, old-timey imagery. If there’s one thing the wine industry is great at, it’s making wine seem grandiose, when, in the end, it’s just fermented fruit juice.

Beyond the mystique, there’s an entire industry based around selling people vacations during which they stomp grapes to make wine “the traditional way”, even though selling wine made this way has been illegal in America for over a century, for hygiene reasons. It’s just good marketing for the people advertising these events to play up the idea that grape treading is a common ancient way of making wine, even though we’ve been pressing grapes in a vastly more efficient way for many millennia; and, with extremely rare exceptions, these better methods were always how wine was made.

Another factor that has played into the popularity of wine treading is the now iconic 1956 episode of I Love Lucy, “Lucy’s Italian Movie”, which features the titular character stomping grapes in an Italian winery. The episode led to a surge in interest in the practice, despite the fact that the Californian grape farmers who supplied the grapes for the episode did so under the condition that a character would explicitly mention that grapes aren’t really pressed by foot by winemakers. It should also be noted that, as alluded to, in rural Europe, where most of the imagery surrounding this practice is sourced from, the use of feet has been wholly absent from the professional winemaking process since the Middle Ages.


Bonus Facts:

  • During Prohibition, grape growers of the day began selling “bricks of wine”, which were primarily blocks of “Rhine Wine”. These often included the following instructions: “After dissolving the brick in a gallon of water, do not place the liquid in a jug away in the cupboard for twenty days, because then it would turn into wine.”
  • Although selling wine produced by grape treading has been outlawed in the States for about a century, this hasn’t stopped grape stomps from becoming a popular way of celebrating the end of the harvest season in certain American towns.

Ketchup or Catsup?

Byron H. asks: Why is it sometimes catsup and other times ketchup?

The two distinct spellings for what today is essentially the same condiment are simply the reflection of the evolution of nearly everyone’s favorite French fry topper. (Well, in certain regions of the world.)

Today often disdained as low-brow, ketchup was revered for the flavor it added to foods when it was first conceived. Initially a paste made from fermented fish guts, it was first recorded in AD 544 in Important Arts for the People’s Welfare; according to the legend, while in the course of pursuing his enemies, Emperor Wu-ti came across a pit filled with fish entrails and covered with dirt, from which “a potent, delicious aroma” could not help but emanate. For some reason, they actually ate it, learned to love it, and at first named it Chu I. (Worcestershire Sauce also has a similar, slightly stomach-turning origin, only in that case it is still made pretty much the same way it was in the beginning.) Eventually, Chu I evolved to kôe-chiap or kê-chiap.

Over time kôe-chiap migrated and made its way to Indonesia where it became known as kecap. British sailors were trading there by the 1690s, and it is believed they developed a taste for, and began exporting, kecap from the East Indies during this time.

Also in the 1690s, Brits began tinkering with the sauce’s name, and catchup, which, according to the OED, was a “high East-India Sauce”, first appeared, while by 1711 others were using the spelling ketchup. Nonetheless, it appears that both words were still describing a type of fish sauce . . . although by this time, rather than fish entrails, salted anchovies were being fermented.

Anchovy ketchup quickly became popular in the West, and recipes for it can be found as early as 1732; by 1742, some recipes had even begun to put a definite British twist on the sauce by adding to the anchovies, “strong stale beer,” mace, cloves, pepper, ginger, and shallots.

Nonetheless, as anchovies were not always at hand, and necessity being the mother of invention, cooks in the west began experimenting with different ingredients to form the base of their ketchups. Two that quickly became popular were mushrooms and walnuts.

In 1747, Hannah Glasse included a mushroom ketchup recipe in The Art of Cookery, Made Plain and Easy, and it involved several steps including salting the mushrooms, then boiling, straining, skimming, straining (again), boiling with ginger and pepper, straining (again) and then bottling the mixture with mace and cloves.

Another 18th century variant, a family recipe from that of noted novelist Jane Austen, began with a paste of green walnuts which were then liberally salted and soaked in vinegar, strained, boiled, skimmed, boiled (again) with cloves, mace, ginger, nutmeg, pepper, horseradish and shallots, and then bottled. Vinegar, itself a preservative, eliminated the need for fermentation.

By the early 1800s, tomatoes had been established in English cuisine, and so, naturally, they were being used as a base for ketchup as well. In 1810, Alexander Hunter (a physician) in Receipts in modern cookery provided a recipe for a Tomata Sauce that called for first baking the tomatoes, removing the peel and seeds, adding chili vinegar, salt, garlic and shallot. Then it was boiled, skimmed, strained and bottled.

Catsup had arrived in the United States in the early 1800s as well. In his Cook’s Oracle (1830), William Kitchiner (also a physician) included several recipes for catsup that used, alternately, walnuts, cockles, oysters and mushrooms as their bases. Interestingly, Kitchiner also used the word catchup throughout the book, but seemingly exclusively as a term for a mushroom-based sauce.

Kitchiner also had a couple of recipes for sauces that could have been (but weren’t) called catsups: Love-apple Sauce and Mock Tomata Sauce. Both were highly spiced with shallot, clove, thyme, bay leaf, mace, salt and pepper, and the latter, which apparently used apple, also incorporated turmeric and vinegar.

It appears that catsup was the predominant term in the United States through about the 1870s, when ketchup began to appear. Some authorities claim that the distinction between the two marked a difference in quality, as ketchup, the term used in Britain, was usually seen on expensive, imported sauces; this claim is supported by the fact that Heinz sold two types of these condiments in the 1880s: a cheaper Duquesne Catsup and a more expensive Keystone Ketchup.


Bonus Facts:

  • Notably, while Heinz was selling both catsup and ketchup in glass bottles, the first to do so commercially was Jones Yerkes, who introduced the practice in 1837. Nonetheless, Heinz perfected it by first ensuring that it was in clear bottles so his customers could verify his product’s purity and, second, introducing the iconic shape so well-known today.
  • If you hold a traditional Heinz bottle at about a 30 degree angle and patiently tap the 57 stamped on the neck, eventually the ketchup will flow beautifully. Physicists have another method, although it requires an explanation (of course). Because of its composition of tomato, vinegar, water, syrup and spices, ketchup is a “Non-Newtonian” fluid whose thickness and stickiness can be changed with force. Therefore, essentially all you have to do is “shake the bottle beyond its breaking point, and ketchup becomes 1,000 times thinner,” and, therefore, easier to pour.
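That “1,000 times thinner” behaviour can be illustrated with the standard power-law (Ostwald–de Waele) model of shear-thinning fluids, in which apparent viscosity equals K times the shear rate raised to (n − 1). The consistency index K and flow-behaviour index n below are assumed, ballpark values chosen for illustration, not measured properties of any real ketchup:

```python
# Power-law (Ostwald-de Waele) model of a shear-thinning fluid:
# apparent viscosity = K * shear_rate**(n - 1); n < 1 means the
# harder the fluid is sheared, the thinner it behaves.
def apparent_viscosity(shear_rate, K=50.0, n=0.25):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s).
    K and n are assumed illustrative values, not measured data."""
    return K * shear_rate ** (n - 1)

at_rest = apparent_viscosity(0.1)    # ketchup sitting in the bottle
shaken = apparent_viscosity(1000.0)  # vigorously shaken or tapped

# With n = 0.25, a 10,000-fold jump in shear rate thins the fluid
# by a factor of 10,000**0.75 = 1,000.
print(round(at_rest / shaken))  # 1000
```

The point is qualitative: any n well below 1 reproduces the “shake it hard and it suddenly pours” effect, whatever the exact constants.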

A Lesson in Failure- The Rise of the Mars Candy Company



The legendary Roald Dahl’s 1964 book Charlie and the Chocolate Factory (and its two subsequent film adaptations, from 1971 and 2005) told the story of a magical candy factory and its eccentric and mysterious owner, Willy Wonka. A chocolate river, a stick of gum containing an entire three-course dinner, Everlasting Gobstoppers, and, of course, the singing and dancing Oompa-Loompas are just a few of the surprises that awaited inside the doors of the famously secretive factory. Of course, in a real-life candy empire, there are a lot more failures, hard work, and father/son disputes, and an unfortunate lack of Oompa-Loompas. What follows is the tale of how the Mars candy company went from a small candy business started by a polio-stricken teen to one of the largest candy companies in the world.

The story of Mars candy starts in Newport, Minnesota (southeast of St. Paul) with the birth of Franklin Clarence Mars on September 23, 1883. Frank was the son of a gristmill operator (grinding grains into flour) who had moved to Minnesota from Pennsylvania with his wife, Alva, only months prior to Frank’s birth. When Frank was little, he battled polio, which left him disabled for the rest of his life.

As you might imagine from this, he was rather immobile as a kid, so he spent a lot of time watching his mother bake and cook, including watching her go through the difficult and tedious process of making fresh chocolates. He got so into candy that he began selling Taylor’s Molasses Chips and creating his own candy recipes while still in high school. By the time he graduated, he had a pretty successful career going selling candy wholesale to stores in the Minneapolis/St. Paul area.

In 1902, he married Ethel G. Kissack, a schoolteacher. About a year later, Frank’s first son – Forrest – was born. It was also around this time that the candy market became oversaturated. After the Hershey Bar, the United States’ first mass-produced candy bar, was introduced in 1900, a host of locally owned candy companies popped up. The competition was fierce, especially in the Minneapolis area. Brands like Chick-O-Stick, Pearson’s, and Cherry Hump started in Minnesota and all are still around today. So it wasn’t a huge surprise when Frank’s wholesale business went under.

To add a little lemon juice to his fresh wound, in 1910, Ethel divorced Frank for being unable to support her. She also won sole custody of Forrest, whom she promptly sent to live with her parents in Saskatchewan, Canada. The ugliness of the divorce wasn’t a good omen for Frank and Forrest’s future relationship; they would rarely see each other until years later, with tensions still running high.

Frank, never a man to get too down, tried again, this time marrying another Ethel – Ethel V. Healy – and moving to Seattle, Washington to go back into the candy business. He failed again with wholesaling and creditors started taking his stuff.

He moved thirty miles south to Tacoma and again struggled.

In 1920, Frank and Ethel the second moved back to Minnesota to be closer to their families. At this time, Frank had only four hundred dollars to his name. But despite his constant struggles with candy, he continued to try, this time making his own candy at three a.m. every morning with his wife doing the selling. The candy bar was the Mar-O-Bar, made of chocolate, nuts and caramel. It was tough, but they started to make a little money, and then a good amount more. After years of trying, Frank Mars had finally carved out a somewhat lucrative career in candy. The couple were even able to buy a house and would have been comfortable remaining local candy suppliers. But the invention of the Milky Way changed all of that.

It was also around this time that Frank’s son, Forrest, was establishing a mighty fine business sense. After attending college at Berkeley and, later, Yale, he became a traveling salesman for Camel cigarettes. As the legend goes, one night in Chicago Forrest went a little overboard plastering ads across the city for Camel. He was arrested, but his estranged father bailed him out. While at a soda counter afterwards, Forrest looked into his chocolate malt glass and said, “Why don’t you put a chocolate-malted drink in a candy bar?”

Nougat had been invented in Italy in the 15th century (see: What Nougat is Made Of), but a variation using whipped egg whites and sugar syrup (instead of the usual honey) was invented by the Pendergast Candy Company in the early 20th century. They were based in, yes, Minneapolis, and the nougat became known as “Minneapolis Nougat.” Frank Mars had started using nougat in his candies in 1920; in fact, he called his company “the Nougat House” for a time. But in 1923, he mixed the nougat with chocolate and put caramel on top of it. Using his cosmic name as inspiration, he called it a “Milky Way,” and it was introduced that same year. Within a year, Mars’ sales jumped tenfold, grossing about $800,000 (about $11 million today). Said Forrest later, “that damn thing sold with no advertising.”

The Mars company quickly launched into orbit. It moved its headquarters to near Chicago, and by 1928, just five years after introducing the Milky Way, it was making $20 million in gross revenue (about $273 million today). In 1930, Mars introduced the Snickers bar (named after Frank’s favorite horse) and, soon after, the Three Musketeers.

Frank started living in grand fashion, buying fast cars, big houses, and a horse farm for his wife. Meanwhile, Forrest didn’t like what he saw. Knowing that there was more profit, and security, to be had by cutting costs and expanding the business into other areas, he tried to convince his father to give him a third of the company and let him expand to Canada (where Forrest had been raised). Frank refused. As Forrest later recounted: “I told my dad to stick his business up his ass. If he didn’t want to give me a third right then, I said, I’m leaving.”

In the end, Frank gave Forrest $50,000 and foreign rights to the Milky Way to basically leave his company alone and move to Europe. Fortunately for the company, that is exactly what Forrest did.

While in Europe, Forrest learned from Switzerland’s Nestle chocolate company how to make good, sweet, European-style candy. He tweaked the recipe of the Milky Way to make it sweeter, calling the result the “Mars Bar.” It sold even better in Europe than the Milky Way had, amassing Forrest his own considerable fortune.

Frank passed away in 1934, at the young age of fifty. His wife, Ethel, took over the company, and then Frank’s half-brother, William L. (Slip) Kruppenbacher, ran it when Ethel became too ill to do so. In 1945, Ethel passed away, and the company moved to the next of kin: the business-savvy Forrest.

Forrest took over the company and immediately diversified, turning Mars into more than a candy company. He worked with a European pet food supplier and, eventually, created Whiskas cat food. He worked with a Texas salesman to create ready-to-make rice, which became Uncle Ben’s Rice. Besides being a brilliant, money-making businessman, he was known for a violent temper and a demand for perfection. For example, he was known to throw chocolate bars out of windows if he felt they didn’t meet his quality expectations. Remarkably quickly, he turned a regional candy maker into a worldwide food empire.

Today, it is his three children, John, Forrest Jr., and Jacqueline, who are reaping the benefits. They are among the richest people in the world, each owning a third of Mars, Inc., which currently employs over 75,000 people and is valued at around $70 billion, making it approximately the sixth largest privately held company in the world.


Bonus Facts:

  • In 1941, Forrest Mars Sr. struck a deal with Bruce Murrie, son of famed Hershey president William Murrie, to develop a hard shelled candy with chocolate at the center.  Mars needed Hershey’s chocolate because he anticipated there would be a chocolate shortage in the pending war, which turned out to be correct. As such, the deal gave Murrie a 20% stake in the newly developed M&M; this stake was later bought out by Mars when chocolate rationing ended at the end of the war. The name of the candy thus stood for “Mars & Murrie,” the co-creators of the candy.
  • The “M&M” was modeled after a candy Forrest Mars, Sr. encountered while in Spain during his quasi-exile from Mars in the 1930s. During the Spanish Civil War, he observed soldiers eating chocolate pellets with a hard shell of tempered chocolate. This prevented the candies from melting, which was essential when they were included in soldiers’ rations. Not surprisingly, during WWII, production of M&Ms skyrocketed, as they were sold to the military and included as part of United States soldiers’ rations. This also worked as great marketing; when the soldiers came home, many were hooked.
  • William Murrie, father of Bruce Murrie, was originally hired by Milton Hershey in 1896 as a salesman. In his first week on the job, he managed to oversell the plant’s production capacity. This so impressed Hershey that he tabbed Murrie to be the future president of Hershey, a position Murrie assumed in 1908 and held until retiring in 1947. So how did he do? When William Murrie first took over running Hershey, gross annual sales tallied up to about $600,000 (about $15.5 million today). Upon his retirement in 1947, he had grown the company to gross annual sales of about $120 million (about $1.25 billion today), meaning over the span of those 39 years, he increased annual sales at an astounding average of approximately 15% per year.
  • In the 1920s, Murrie tried to convince Hershey that they should produce a chocolate bar with peanuts.  Hershey didn’t like the idea, but let him go ahead as long as the bar wasn’t under the Hershey brand name.  And so, in 1925, the “Chocolate Sales Corporation”, a fictitious company Murrie came up with, debuted the “Mr. Goodbar”, which was wildly successful.
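The roughly-15%-per-year figure in the Murrie fact above can be checked with the standard compound annual growth rate formula, (end/start)^(1/years) − 1, using the nominal dollar figures given in the text:

```python
# Compound annual growth rate implied by growing Hershey's gross
# annual sales from $600,000 (1908) to $120 million (1947):
start_sales = 600_000
end_sales = 120_000_000
years = 39  # 1908 to 1947

cagr = (end_sales / start_sales) ** (1 / years) - 1
print(round(cagr * 100, 1))  # 14.6 -- i.e. roughly 15% per year
```

(Using the inflation-adjusted figures instead would give a lower real growth rate; the article’s 15% is the nominal one.)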

The Cheese in Cheez Whiz

Jesus M. asks: Is there any real cheese in cheese whiz?

As America gets ready for its upcoming Super Bowl parties (or Royal Rumble party, if that’s your thing), Cheez Whiz – the yellowish-orange, gooey, bland-tasting “cheese” product – will surely make an appearance at some of them. But what is Cheez Whiz? Why was it invented? And is there really cheese in Cheez Whiz?

James L. Kraft was born in 1874 in Stevensville, Ontario on a dairy farm. When he was 28 years old, he immigrated to the United States, where he first chose Buffalo, New York to settle in. (Where a little over a half century later another common Super Bowl snack, the Buffalo Wing, was born. See: Who Invented Buffalo Wings)

Why he chose Buffalo (well over two hundred miles from his home in Ontario) over Detroit (under fifty miles away from Stevensville) isn’t known. In fact, there seems to be no real record at all of why Kraft went to Buffalo. But most important to this story, while there, he eventually invested in a small cheese company. He quickly rose up through the company and was invited to move to Chicago to run the cheese company’s branch there. After moving to Chicago, the company either went under or the heads of the company pushed Kraft out (records are conflicting as to what exactly happened). Either way, Kraft was left stranded in Chicago, reportedly with little money (perhaps lending credence to the “went under” theory) and no job.

Using his meager remaining funds, he bought a horse (named Paddy) and a carriage. For the next few months, before dawn every day, he would take Paddy and the carriage down to the wholesale market on Chicago’s Water Street and buy blocks of cheese in bulk. He would then sell it to the shop owners around town at marked-up prices. His reasoning was that he was doing the hard part for them – finding and buying the cheese and then bringing it directly to the shop owners – and that was worth the markup. He was right. Within five years, Kraft’s business was successful enough that four of his brothers from Canada were able to come to Chicago and help James build his new cheese company. By 1914, they had incorporated as J.L. Kraft & Bros Company. That same year, they opened their first cheese factory in Stockton, Illinois. The next year, in 1915, they changed the cheese game.

While Kraft was the first to receive a US patent for processed cheese, he wasn’t the first to invent it. In 1911, Walter Gerber and Fritz Stettler of Switzerland experimented with their native Emmentaler cheese to see if they could increase its shelf life for export purposes. Their experiments included shredding the cheese, heating it to various temperatures, and mixing it with sodium citrate (still used as a food additive today) to produce a “homogenous product which firmed upon cooling.”

It is unclear if Kraft knew of these Swiss gentlemen, but, in 1916, he applied for US Patent 1186524, titled “Process of sterilizing cheese and an improved product produced by such process.” The patent describes a way “to convert cheese of the Cheddar genus into such condition that it may be kept indefinitely without spoiling, under conditions which would ordinarily cause it to spoil, and to accomplish this result without substantially impairing the taste of the cheese.”

It goes on to explain the process of slicing, heating, and stirring cheddar cheese in great detail, including how it needed to be heated to 175 degrees Fahrenheit for 15 minutes while being whisked continuously. The patent never mentions the addition of a sodium additive or “emulsifiers” (be it sodium citrate, like the Swiss used, or a more general sodium phosphate). This is likely because patents are, of course, public, and whatever Kraft added to the mix, he probably wanted to keep secret from the competition, a fairly common practice in the food industry. This was the birth of commercialized processed cheese.

Kraft’s revolutionary new cheese product couldn’t have come at a better time for him, at least business-wise. When the United States entered World War I in 1917, there was a need for food products that would last and could be shipped long distances. By packing his cheese into 3-1/2 and 7-3/4 ounce tins, Kraft was able to become the cheese supplier to the US Army, earning himself a huge payday and a whole generation of soldiers trying out his cheese.

Flash forward about thirty-five years, to 1952. The company, now named Kraft Cheese, was the number one cheese seller in the United States. (At the time, they were also selling other dairy products and even candy.) America was in the middle of the post-war economic boom and at the beginning of the “convenience culture,” when products that made life easier were highly sought after, which also gave rise to the TV Dinner. Toward this end, just two years before, in 1950, Kraft had developed a revolutionary convenience-oriented product: pre-sliced, pre-packaged cheese – the famous “Kraft Single.”

It was around this time that Kraft Cheese was doing great business in Britain, thanks to having sent processed cheese off to World War II with the Allied soldiers. This brings us to a popular English dish called Welsh rarebit, which is basically a hot, melted cheddar cheese sauce poured over toasted bread – think an open-faced grilled cheese. While delicious, the cheese sauce is rather labor-intensive to make, requiring much time and careful stirring. Kraft, trying to appease their British customers, asked their team of food scientists, led by Edwin Traisman (who would later help McDonald’s flash fry their french fries), to come up with a faster alternative for this cheese sauce. After a year and a half of experimentation, they did. Cheez Whiz was introduced in Britain in 1952, and soon after across the United States.

Given its reputation, it might surprise you to learn that Cheez Whiz was, in fact, originally made with quite a bit of real cheese. Only relatively recently did this change.

In 2013, Michael Moss, a writer for the National Post (a Canadian national newspaper), spoke with Dean Southworth, a member of Traisman’s team at Kraft in the 1950s who helped develop Cheez Whiz. Southworth, a huge fan of the original Cheez Whiz, said the original was “a nice spreadable, with a nice flavor. And it went well at night with crackers and a little martini. It went down very, very nicely, if you wanted to be civilized.”

However, in 2001, he settled down for a “civilized” evening of one of his favorite snacks- crackers, martini, and Cheez Whiz that he had purchased from the store that day.  Upon spreading the Whiz onto a cracker and taking a bite, he said he exclaimed to his wife, “My God, this tastes like axle grease!” Something had radically changed in this jar of Cheez Whiz from the last he had purchased.

Indeed, when he looked at the ingredients list, he saw what you’ll still see today: Cheez Whiz sold in the United States no longer explicitly lists cheese in its ingredients. Rather, if you look, you’ll see 27 other ingredients, including whey (a protein byproduct of milk, the liquid left after the milk has been curdled and strained), corn syrup, and milk protein concentrate (a cheaper alternative to higher-priced powdered milk). When Moss and Southworth approached a Kraft spokeswoman about this in 2013, she told them there was actually still cheese in the Whiz, though much less than there was before. When asked just how much real cheese was still included in the product, she declined to comment.

She claimed the reason cheese was no longer listed in the ingredients was that the label already listed the necessary components of processed cheese (i.e. milk, sodium phosphate, cheese cultures), so there was no need for “cheese” to be explicitly stated. At the end of the conversation, she explained, “We made adjustments in dairy sourcing that resulted in less cheese being used. However, with any reformulation, we work hard to ensure that the product continues to deliver the taste that our consumers expect.”

Mr. Southworth, of course, didn’t care for the new taste. In the end, the use of some of the ingredients of cheese, rather than cheese itself, has some business benefits. As Southworth said, “I imagine it’s a marketing and profit thing. If you don’t have to use cheese, which has to be kept in storage for a certain length of time in order to become usable… then you’ve eliminated the cost of storage, and there is more to the profit center.”


The Origin of the Oreo Cookie

Harry K. asks: Who invented the Oreo cookie?

In 1890, a group of eight large New York City bakeries combined to form the New York Biscuit Company and built a giant six-story factory in West Chelsea. Eight years later, they merged with their competitor, Chicago’s American Biscuit and Manufacturing Company, to form an even larger conglomerate – the National Biscuit Company – but the factory and headquarters remained in Chelsea. In 1901, the National Biscuit Company put their abbreviated company name on a box of wafers for the first time – Nabisco. Soon, Nabisco became the company’s official name.

On April 2, 1912, the National Biscuit Company announced to their sales team that they were introducing three “highest class biscuits” in a grouping they called the “Trio.” Two of the cookies, the Mother Goose Biscuit and Veronese Biscuit, didn’t sell particularly well and quickly disappeared from the shelves. The third, the Oreo Biscuit, did. “Two beautifully embossed chocolate-flavored wafers with a rich cream filling,” the Oreo Biscuit was sold in a yellow tin with a glass cover for approximately 30 cents a pound (about $7.13 today). While it went national in April, it was just a month before that the National Biscuit Company first filed to register the product with the US Patent and Trademark Office (registration number 0093009). It is commonly stated that the date of registration was March 6th, which is why that is National Oreo Day. However, a simple patent and trademark search reveals that the oft-repeated date is incorrect: the trademark was actually filed on March 14, 1912 and registered on August 12, 1913.

So how did they come up with the idea of the Oreo? By using the time-honored business practice of stealing the idea from a competitor and then marketing it better than the original. You see, there was another popular creme-filled sandwich cookie that came before the Oreo, made by Sunshine Biscuits. Sunshine Biscuits was a company run by Joseph and Jacob Loose and John H. Wiles; the Loose brothers were originally part of the great bakery conglomeration of 1898 (the one that formed the National Biscuit Company).

Wanting a more personal approach to baking and not wanting to be lost in the bakery conglomerate, Loose liquidated his assets and helped form Sunshine Biscuits. (The company actually was the third largest cookie baker in the US when it was acquired in 1996 by Keebler. To this day, the Sunshine brand still appears on Cheez-its, among other products.)

In any event, in 1908, four years before the Oreo, Sunshine debuted the upscale, and soon to be very popular, Hydrox biscuit, which the Oreo was a pretty blatant rip-off of – cream filling, embossing, and all. Of course, Nabisco denies this is where the idea for the Oreo came from, but the evidence at hand strongly indicates otherwise.

As for the name, there has never been a firm answer for why the National Biscuit Company chose “Oreo,” though there are several theories. There is speculation that “Oreo” is derived from the French word for gold, “or,” since the original packaging was gold and the item was meant to be a “high-class” confectionery. It could also come from the Greek word for mountain or mound, “oros,” since an Oreo is a “mountain” of a cookie. It has also been speculated that maybe it was named for the cookie itself: two “O”-shaped cookies sandwiching the cream, O-cream-O.

The identity of the designer behind the distinctive emboss on top of each cookie – and what the emboss signifies – has also become part of the Oreo mystery. The first design was simple enough, with the name “Oreo” and a wreath at the edge. In 1924, the company augmented the original design to go with a 1921 name tweak – from “Oreo Biscuit” to “Oreo Sandwich.” The 1924 design added a ring of laurels and two turtledoves. Today’s elaborate, beautiful design came along nearly three decades later, drawn up in 1952 and appearing on store shelves in 1954.

But what does the design signify, if anything? Historians believe the circle that encases the word “Oreo” with an antenna-type symbol on top was an early European symbol for quality. Cookie conspiracists believe that the antenna symbol is actually a Cross of Lorraine, a symbol identified with the famed Knights Templar. The “four-leaf clovers” that surround the name could be just that, or they could be the cross pattée – a geometric pattern of four triangles radiating outwards that is also associated with the Knights Templar and the Freemasons. It’s up to the individual what they want to believe, but this author thinks the Oreo cookie is a delicious Da Vinci Code-style map leading to a treasure buried a thousand years ago… Or, as I like to call it, the probable plot of National Treasure 3.

Now, who designed the emboss? Evidence points to William Turnier. However, while Nabisco admits that a man by the name of William Turnier worked for them for fifty years, they deny that he developed the 1954 design. That said, his son and a drawn blueprint indicate otherwise. Turnier joined the company in 1923, working in the mail room. He eventually worked his way up to the engineering department, helping make the dies that stamped the cookies – the industrial-sized cookie cutters, as it were.

So where’s the evidence? In the home of Bill Turnier, William’s son, perched on a wall is a framed, line-drawn 1952 blueprint of the modern Oreo design. (If you’re curious: Why Blueprints are Blue) Underneath the blueprint is written “Drawn by W.A.Turnier 7-17-52,” two years before the design would find itself on the Oreos sold in stores. Despite this evidence, the corporate archives of Kraft (which now owns Nabisco) only say that Turnier was a “design engineer” and that he received a Suggestion Award in 1972 for an idea “that increased the production of Nilla Wafers on company machinery by 13 percent.” So can Bill shed any light on what his father was thinking when he seems to have drawn the design? Not really, though he did admit that the design, while beautiful and resembling more mysterious symbols, probably had nothing to do with the Knights Templar. His father wasn’t a Mason either.

As for the stuff between the intricately-designed cookies – the filling – it was made partially of lard (pig fat) until 1997. In 1994, Nabisco embarked on a nearly three-year revamping process of the filling to take the lard out. In charge of this was Nabisco’s principal scientist Sam Porcello, otherwise known as “Mr. Oreo.” By that point, Porcello was already a cookie legend, holding five Oreo-related patents, including Oreos encased in white and dark chocolate. By December 1997, the Oreo cookie was lard-free, but there was another problem – the lard had been replaced by partially hydrogenated vegetable oil; yes, the very much not-good-for-you trans fats. As the Chicago Tribune put it, “Later, research showed that trans fat was even worse for the heart than lard.” Finally, in January 2006, healthier (and more expensive) non-hydrogenated vegetable oil was put into Oreos instead. Today’s filling is additionally made with loads of sugar and vanilla extract, creating a cookie that is still delicious but slightly better for you. Or, perhaps more aptly, less bad for you.

Bonus Facts:

  • Bill also says that his father created or tweaked other well-known Nabisco designs in his half century with the company, including tweaks to the Nutter Butter, the Ritz Cracker, and a dog’s favorite treat, the Milk-Bone.
  • The basic Oreo cookie is 71 percent cookie, 29 percent cream filling.

Why Do Mentos and Diet Coke React?


If you’ve ever wondered why Diet Coke and Mentos react so strongly to one another, well, wonder no more.

To start, it should be noted that it’s not just Diet Coke and Mentos that “react”; other carbonated beverages will also readily respond to the addition of Mentos. What’s going on here is that each Mentos candy has thousands of small pores on its surface that disrupt the polar attractions between the water molecules, creating thousands of ideal nucleation sites for the gas molecules in the drink to congregate. In non-sciency terms, this porous surface creates a lot of bubble growth sites, allowing the carbon dioxide bubbles to rapidly form on the surface of the Mentos. If you use a smooth-surfaced Mentos, you won’t get nearly the same reaction.

The buoyancy of the bubbles and their growth in size will quickly cause the bubbles to leave the nucleation site and rise to the surface of the soda.  Bubbles will continue to form on the porous surface and the process will repeat, creating a nice foamy result.

In addition to that, the gum arabic and gelatin in the Mentos, combined with the potassium benzoate, sugar, or (potentially) aspartame in diet sodas, also help in this process. These ingredients lower the surface tension of the liquid, allowing for even more rapid bubble growth on the porous surface of the Mentos: higher surface tension means a more difficult environment for bubbles to form. (For your reference, compounds like gum arabic that lower surface tension are called “surfactants.”)

As to why diet sodas like Diet Coke produce so much bigger a reaction, it’s because aspartame lowers the surface tension of the liquid much more than sugar or corn syrup will. You can also increase the effect by adding more surfactants to the soda before you add the Mentos, such as a mixture of dishwashing soap and water.

Another factor contributing to the size of the geyser is how rapidly the object causing the foaming sinks in the soda. The faster it sinks, the faster the reaction can happen, and a faster reaction means a bigger geyser; a slower reaction may release the same amount of foam overall, but produces a much smaller geyser. This is another reason Mentos work so much better than other similar confectioneries: Mentos are fairly dense and so tend to sink rapidly in the liquid. If you crush the Mentos, so it doesn’t sink much at all, you won’t get nearly as dramatic a reaction.

Yet another factor that can affect the size of the Mentos / Coke geyser is the temperature of the soda itself. The higher the temperature, the bigger the geyser, due to gases being less soluble in warmer liquids. So, basically, the gas molecules are more “ready” to escape the liquid, resulting in a faster reaction.
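The temperature effect can be made concrete with Henry’s law plus a van ’t Hoff-style temperature correction. The constants below are rough textbook values for CO2 in water, not measurements from any Mentos experiment, so treat this as an illustrative sketch:

```python
import math

# Approximate CO2 solubility in water via Henry's law with a
# van 't Hoff temperature correction. H0 and B are rough literature
# values for CO2, used purely for illustration.
H0 = 0.034          # Henry solubility at 298 K, mol/(L*atm)
B = 2400.0          # temperature coefficient for CO2, in kelvin

def co2_solubility(temp_k: float) -> float:
    """Dissolved CO2 (mol/L at 1 atm of CO2) at the given temperature."""
    return H0 * math.exp(B * (1.0 / temp_k - 1.0 / 298.0))

cold = co2_solubility(278.0)   # fridge-cold soda, about 5 C
warm = co2_solubility(308.0)   # a soda left in the sun, about 35 C
print(f"cold soda holds {cold / warm:.1f}x more CO2 than warm soda")
```

Since warm soda can hold far less dissolved CO2, more of the gas is “ready” to escape the moment nucleation sites appear, which is why a warm bottle makes the bigger geyser.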

Note that while caffeine is often cited as something that will increase the explosive reaction with the soda, this is not actually the case, at least not given the relatively small amount of caffeine found in a typical 2-liter bottle of soda generally used for these sorts of Diet Coke and Mentos reactions.  If you add enough caffeine, you will see a difference, but the levels required here to see a significant difference are on the order of the amount that would kill you if you actually consumed the beverage. (See: How Much Caffeine Would It Take to Kill You)

You’ll also sometimes read that the acidity of the soda is a major factor in the resulting geyser. This is not the case either. In fact, the level of acidity in the Coke before and after the Mentos geyser does not change, negating the possibility of an acid-based reaction (though you can make such an acid-based reaction using baking soda).

Bonus Facts:

  • While you’ll sometimes hear an urban legend that people have died from drinking Diet Coke and eating Mentos, to date there has not been a single documented instance of this ever happening. This is likely for two reasons. First, the act of drinking soda releases quite a bit of the carbonation in it, limiting the possible effect. Second, even if one did get a strong reaction from eating Mentos and drinking Diet Coke at the same time, you’d likely just quickly vomit up the foam, of which there have been numerous recorded instances. On a similar note, birds’ stomachs will not blow up if you feed them dried rice or Alka-Seltzer.
  • As an aside, while I personally have never tried drinking Diet Coke and eating Mentos, I have had a similar experience after taking a new kind of multivitamin I’d not tried before, combined with drinking a 16 ounce container of Dr. Pepper. Within a couple minutes of taking the vitamin (after eating and consuming the Dr. Pepper), I noticed I started to feel like I was going to throw up. I had not at that point thrown up in about 15 years, so this was bizarre. To keep my streak alive, I attempted, vainly, to keep the contents of my stomach down. Ultimately, the pressure became too much and I threw up a ton of foam (red, like the multivitamin coating). It seems likely that the surface of this vitamin was porous, and it most likely also contained at least gum arabic. As I had not chewed it before swallowing, it found its way to the still somewhat carbonated liquid (although much less so, having been drunk) and produced enough foam to overfill my already somewhat full stomach from dinner. So let that be a lesson to you: certain types of multivitamins and soda also produce a nice foamy reaction. I’ve also noticed that if you suck the chocolate off a Snickers bar, then chew and swallow it, and very quickly afterwards drink some soda, you’ll also get a nice foamy effect in your mouth. Science!
  • The current world record for the most Mentos / carbonated beverage geysers to be set off simultaneously happened on October 17, 2010 and included 2,865 such geysers.
  • The name “Coca-Cola” was suggested by Frank Robinson, the bookkeeper of Coke’s creator, Dr. John Pemberton, stemming from the two key ingredients: extracts from the coca leaf and kola nut. Robinson was also the first to pen the now classic cursive “Coca-Cola” logo.
  • While there were initially different versions of Coca-Cola being sold (depending on the manufacturer, of which there were three primary businesses Pemberton had sold the formula to), all the versions contained cocaine, with some estimates of up to nine milligrams of cocaine per serving. However, Asa Candler, who eventually finagled exclusive rights to Coca-Cola, claimed that his formulation included only around 1/10th the original amount of cocaine, and by 1903 he had removed cocaine from Coca-Cola by using “spent” coca leaves leftover from the cocaine extraction process. This still resulted in Coca-Cola having trace amounts of cocaine, though. They’ve since gotten around this by using cocaine-free coca leaf extract. The company that prepares this extract, Stepan Company in Maywood, New Jersey, also legally makes cocaine for medicinal purposes.
  • The term “soda-pop” was a moniker given to carbonated beverages due to the fact that people thought the bubbles were produced from soda (sodium bicarbonate), as with certain other products that were popular at that time.  A more correct moniker would have been “carbonated-pop”.
  • The “pop” part of the term came about in the early 19th century, with the first documented reference in 1812 in a letter written by English poet Robert Southey; in this letter he also explains the term’s origin: “Called on A. Harrison and found he was at Carlisle, but that we were expected to supper; excused ourselves on the necessity of eating at the inn; supped there upon trout and roast foul, drank some most admirable cyder, and a new manufactory of a nectar, between soda-water and ginger-beer, and called pop, because ‘pop goes the cork’ when it is drawn, and pop you would go off too, if you drank too much of it.”
  • While in the beginning carbonation was added to drinks because it was thought to be beneficial to the human body, today carbonation is added for very different reasons, namely taste and shelf life. Carbonating beverages – introducing CO2 into the drink mix under pressure – makes the drink slightly more acidic (carbonic acid), which serves to sharpen the flavor and produces a slight burning sensation. It also functions as a preservative, which increases the shelf life of the beverage.

The Momentous Peanut Butter Hearings



An average American child eats about 1,500 peanut butter and jelly sandwiches prior to graduating from high school – about one sandwich every four or five days. Americans eat a lot of peanut butter. Besides being popular and delicious, peanut butter has also had a tremendous impact on how foods are made and labeled today. Thanks to the “Peanut Butter Hearings,” we can now be reasonably sure that what we think we are eating is actually what we are eating.
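The sandwich arithmetic in that opening claim is easy to verify, assuming roughly 18 years from birth to high-school graduation (the 1,500 figure is from the statistic above):

```python
# Quick check: ~1,500 PB&J sandwiches over ~18 years comes out to
# one sandwich roughly every four or five days.
sandwiches = 1500
days = 18 * 365                    # birth through high-school graduation
days_per_sandwich = days / sandwiches
print(f"one sandwich every {days_per_sandwich:.1f} days")  # ≈ 4.4 days
```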

Contrary to popular belief, peanut butter was not invented by George Washington Carver. For instance, around the 14th and 15th centuries, the Aztecs of Mexico made peanut paste by mashing up roasted peanuts, and it’s possible peanut butter pre-dates this. More recently, while Carver developed innovative ways to cultivate, use, and grow peanuts (among numerous other innovations in various areas), it was actually Canadian Marcellus Gilmore Edson, a Montreal native, who applied for US patent 306727 in 1884, when Carver was about 20 years old. The patent described a process of milling roasted peanuts until the peanuts reached “a fluid or semi-fluid state” to form a “flavoring paste from said peanuts.” In other words, peanut butter.

In 1898, John Kellogg (yes, of cereal fame) received a patent for a “process of producing alimentary products,” in which he improved on this food item by using boiled peanuts (instead of roasted), turning the paste into the same consistency as “hard butter or soft cheese.” Kellogg thought so highly of his new product that he served it to the residents of his religious healthy-living, and somewhat infamous, Battle Creek Sanitarium.

It was at C.H. Sumner’s concession stand at the 1904 World’s Fair in St. Louis that peanut butter was first introduced to a mass audience. He apparently sold over $700 of peanut butter there (about $18,000 today), while also selling other recently introduced (at least to a world-wide audience) foods like hot dogs in buns and ice cream in cones. Krema Products in Columbus, Ohio began mass producing peanut butter in 1908 and, to this day, is the oldest peanut butter manufacturing company still around. By the time peanut butter and jelly sandwiches became a mainstay in American soldiers’ diets during World War II (see: The Surprisingly Short History of the Peanut Butter and Jelly Sandwich), peanut butter had already become a staple in every American’s kitchen cabinet.

The United States government first became (at least, officially) concerned about what was put into its citizens’ food way back in 1862, when President Abraham Lincoln appointed Charles M. Wetherill to oversee the chemical division of the newly formed Department of Agriculture. A noted chemist, Wetherill made his first project “a chemical study of grape juice for winemaking,” to decide if adding sugar to increase alcohol content should be considered “adulteration.” Wetherill determined it was not. The study also alluded to “problems of food preservation and uses of chemical preservatives.”

The Pure Food and Drug Act of 1906 (or, as it was more widely known then, the Wiley Act) was the next big step in protecting American consumers from mislabeled (or unlabeled) food products. Pioneered by the chief chemist of the Department of Agriculture and crusader against food adulteration, Harvey W. Wiley, the act fought against false advertising, mislabeling, and adulteration of foods and medicines. It also prevented interstate commerce of items that were not properly labeled. The opening line of the act reads,

“For preventing the manufacture, sale, or transportation of adulterated or misbranded or poisonous or deleterious foods, drugs, medicines, and liquors, and for regulating traffic therein, and for other purposes.”

Soon, the FDA – the Food and Drug Administration (it was under several different names until 1927) – was formed to regulate and uphold the law.

While the FDA did an admirable job trying to regulate labeling and prevent food “adulteration” (the addition of a non-food item to increase the weight/quantity of a food item, which may result in the loss of actual quality), manufacturers found loopholes. As the FDA points out, the frozen foods industry that cropped up and prospered after World War II consumed a lot of the FDA’s energy and manpower. With new products like frozen TV dinners (See: The Origin of the TV Dinner), freeze dried coffee, and “instant chocolate drink” being introduced to the market, the FDA had to figure out what constituted food adulteration and what a label on these types of foods should and should not say. Due to this lack of manpower, the FDA at the time allowed manufacturers of foods that already existed to tweak their recipes without needing the FDA’s approval. This is how we get to the infamous peanut butter hearings.

Since the 1940s, the peanut butter industry had been asking the FDA if the addition of glycerin (a sugar alcohol that can act as a sweetener and food preservative) constitutes food adulteration. The FDA responded that peanut butter “is generally understood … to mean a product consisting solely of ground roasted peanuts, with or without a small quantity of added salt.” So if glycerin is added, it has to be on the label.

With Jif-brand peanut butter entering the market in 1958 and quickly becoming a huge competitor to the other main peanut butter brands, Skippy and Peter Pan (all still exist today), the manufacturers found other ways to grow their bottom line and still put “peanut butter” in a jar. For instance, prior to the late 1950s, the hydrogenated oil that was used to provide consistency was peanut oil. In 1958, manufacturers began using other, cheaper, hydrogenated oils like cottonseed, rapeseed, canola, and soy, instead of peanut oil in their peanut butter.

Jif, in an effort to overtake Skippy and Peter Pan, added sweeteners and reduced their actual peanut content to improve the flavor and increase the profit margin. According to a lab study (granted, by a lab run by Skippy’s parent company, Best Foods), Jif peanut butter contained 25 percent hydrogenated oil and only 75 percent actual peanuts. This greatly concerned the FDA and other consumer groups.

In 1958, the Food Additives Amendment established that chemicals or substances “generally recognized as safe” can be used in food without further testing. The amendment’s Delaney Clause stipulated that any additive found to cause cancer in man or animal could not be used. Of course, this would lead to big issues down the road, but it allowed the FDA to set standards on the amount of hydrogenated oil used in peanut butter.

A 1959 press release noted that mass-produced peanut butters had, on average, reduced their peanut content by about 20 percent, which the FDA deemed unacceptable. In response, the FDA set the standard at 95 percent peanuts and 5 percent “optional ingredients including salt, sugar, dextrose, honey, or hydrogenated or partially hydrogenated peanut oil” in order for a product to be called “peanut butter.” This did not sit well with the peanut butter industry, as pointed out by Consumer Reports: “The Peanut Butter Manufacturers Association, whose members did not want to miss out on any cost-cutting opportunities, opposed this standard.”

For the next 12 years (yes, years), the peanut butter case and the subsequent hearings (the Peanut Butter Hearings) would embroil the FDA and peanut butter manufacturers in a heated courtroom feud. Back and forth they went, negotiating peanut percentages to determine when something stopped being “peanut butter” and started just being a “peanut spread.”

In 1961, the FDA agreed to roll back the percentage to 90 percent to hurry along a compromise, but the manufacturers still disagreed. So, the FDA instead announced that the “issue warranted further study.” Negotiations continued for another 10 years. The FDA’s own history of the case comments that, “A prominent attorney on the case wryly observed that the peanut butter standards put many lawyers’ children through college.”

In 1965, dramatic public hearings were held, with high-powered peanut butter manufacturer lawyers on one side and consumer activist groups (very much encouraged by the FDA) on the other, led by Ruth Desmond, who had become known as the “Peanut Butter Lady.” The hearings were sensational and widely covered by the press, full of human-interest detail (like Desmond making dinner for her husband every day before heading to court); they took five months and produced over 8,000 pages of transcript. Still, it would take another five years for the matter to be settled.

In 1968, the FDA stated that its findings placed the line between what is a peanut butter and what is a peanut spread at 90 percent peanuts, 10 percent additives. After a long appeals process, the new standard went into effect on May 3, 1971. So, after vast sums of taxpayer dollars spent and years of legal wrangling, from that point forward peanut butter officially had to be 90 percent peanuts. If it wasn’t, it could still be sold, but it had to be called “peanut spread” rather than “peanut butter.” The same went for certain other foods, like jellies and jams, which were also required to meet similar standards. This standard still remains today.

In the book Creamy and Crunchy: An Informal History of Peanut Butter by Jon Krampner, the FDA official who was in charge of arguing the FDA’s case, Ben Gutterman, commented: “If we had said eighty three, they’d have gone to eighty. They were saying ‘Nutritionally, it’s the same. Price-wise, it’s the same.’ We were asking, ‘but when does it stop being peanut butter?’”

In the end, the lengthy and extremely expensive battle over what constitutes peanut butter vs. peanut spread resulted in shifting views concerning the food standards program in the United States, which in turn spurred on new regulations for food labeling, as well as a General Counsel being convened to make sure food regulation practices would not interfere with the creation of new types of food products.  As law professor Richard Merrill noted, “We conclude[d] that regulation should shift away from controlling food composition and focus on providing consumers with more complete information about foods.”


Bonus Facts:

  • Pringles were originally called “Pringles Newfangled Potato Chips.” However, Pringles contain only about 42% potato-based content, with most of the rest coming from wheat starch and various types of flour, including corn and rice flour. Thus, the U.S. Food and Drug Administration made them change the name because the product didn’t technically meet the definition of a potato chip, and they were only allowed to use the word “chip” in very restrictive ways. Specifically, if they wanted to continue to use “chip,” they could only say “Pringles Potato Chips Made From Dried Potatoes.” Not being too fond of this requirement, the company changed the name slightly, using “potato crisps” rather than “potato chips.” Today, of course, most people just know them as “Pringles.”
  • While Procter & Gamble initially argued that Pringles were in fact “chips” in the U.S., they took a different tack in the U.K. In order to avoid a 17.5% Value Added Tax (VAT) there, Procter & Gamble argued that Pringles should be considered a cake rather than a “crisp”: since only 42% of the product is made from potato and it is fashioned from dough, it should not be subject to the tax put on chips. After all, that’s why the U.S. Food and Drug Administration had previously made them change from “chip” to “crisp.” The company initially won in High Court, and Pringles were briefly considered a cake in the U.K. However, Her Majesty’s Revenue & Customs appealed the decision and, in 2009, the ruling was reversed and the company had to start paying the VAT.

What Makes Peanut Butter Sticky?

Mark K. asks: Why does peanut butter stick to the top of my mouth?

Arachibutyrophobia is a proposed humorous name for the fear of peanut butter getting stuck to the top of your mouth, coined by Charles M. Schulz in a 1982 edition of his famed Peanuts comic strip. But why does peanut butter have such a tendency to get stuck to your palate when so many other foods don’t?

As it so happens, peanut butter contains a perfect storm of ingredients seemingly designed with the express intent of creating a peanut flavoured choking hazard. For starters, peanut butter, shockingly enough, contains a lot of peanut oil, which makes it incredibly difficult for your saliva to perform its normal task of helping in this first stage of processing food; as we all know, oil and water don’t really like to mix.

Peanut butter also contains a lot of protein (about 25% by mass, give or take depending on the brand). Protein has a tendency to soak up moisture via osmotic pressure, thus absorbing much of the saliva in your mouth. This effect is worse than with most foods due to the fact that peanut butter also has incredibly low water content, only about 2%. (For reference, jerky, which is by its very definition “dried meat,” tends to have a water content of about 23%.) It’s also worth noting that bread can similarly absorb some of the moisture from your mouth, which, combined with the peanut butter, can leave your mouth incredibly dry when eating a peanut butter sandwich.

But wait, there’s more. In addition to the oil and lack of moisture to wash down the peanut butter with, there’s a process in the food industry known as “supercritical fluid extraction” which uses CO2  to separate chemicals and compounds. SFE can be used for a multitude of tasks in the food industry, like decaffeinating coffee and extracting essential oils and flavours from plants and foods. In regards to peanuts specifically, it can be used during the manufacturing process of peanut butter to help keep the oils and solids from separating without the need for other more traditional stabilizers like unhealthy hydrogenated oils. And, as this paper in the Journal of Food Science explains, one side-effect of SFE in peanut butter manufacturing is that it increases the relative “adhesiveness” of the peanut butter.

Another thing that makes peanut butter so deliciously sticky is its texture, or lack thereof. Smooth peanut butter is noted as being far more likely to stick to the roof of your mouth than the chunky kind because the smooth, thick, deliciously creamy surface can form a “suction cup” of sorts that anchors the peanut paste to a person’s palate. Have you ever gotten your foot stuck in mud? Same type of thing here.


Bonus Facts:

  • Joseph L. Rosenfield in 1928 invented the churning process that gives peanut butter the smooth texture we have today.  He originally licensed this process to Pond Company, who makes the meltingly good Peter Pan peanut butter.  In 1932, he started his own peanut butter company which he named Skippy.
  • Experiments have shown that the levels of peanut oil and peanut seeds in peanut butter have a marked effect on its overall “adhesiveness” and “spreadability”, with higher oil and seed content resulting in more spreadable, but more adhesive peanut butter and vice versa.
  • Chocolate spread, more specifically Nutella, has a lower water content than peanut butter (1%) but it contains far less protein, showing how much difference the protein makes to the “stickiness” of peanut butter.
  • For those living outside of the PB&J eating world, here are a couple of pro-tips if you ever decide to try America’s favorite gooey sandwich. In order to keep the bread from getting soggy from jelly overnight or throughout the day in a lunch box, put peanut butter on both slices of bread and then put a thin layer of jelly in between. The oil in the peanut butter will keep the moisture in the jelly away from the bread. Another PB&J pro-tip: toast the bread first, then immediately add peanut butter to both sides (preferably an ultra-creamy type like Peter Pan or Skippy) with your preferred amount of jelly in the middle. Next, place the slices together and allow the peanut butter a little time to melt and the jelly to warm, then eat it while it’s still warm.

Tapioca and Cyanide



Little pearls swimming in a creamy custard flavored with vanilla or lemon, many of us have fond (and others not so fond) childhood memories of tapioca pudding. Although this staple dessert of the 1970s went out of vogue for a while, today it’s making a comeback. You may not know, however, that the tapioca we use is a refined product whose parent plant is filled with dangerous toxins that, absent proper preparation, can result in cyanide poisoning and possible death.

Cassava, the plant from which tapioca is made, was one of the first plants domesticated, more than 12,000 years ago in South America. Migrating northward, it became a staple crop for people throughout the pre-Columbian Americas. Taken to Africa by the Portuguese, today it is the third-largest source of carbohydrates in much of the world, after rice and maize.

Hardy and nutritious, cassava, also called yuca (which is distinct from the yucca plant), refers to the shrub as well as the starchy root that is harvested for food. There are different varieties of cassava, but generally they are split into two general classifications: sweet and bitter.

Although both are toxic, bitter cassava may contain as much as 400 mg of cyanogenic glycosides per kilogram, making it potentially eight times more toxic than sweet cassava.

Cyanogenic glycosides are present in a startling number of plants cultivated for human consumption, and more than 2,000 known plants total. Not inherently toxic, cyanogenic glycosides are transformed within humans and animals after the plant tissue has been macerated, when enzymatic hydrolysis by beta-glucosidase releases hydrogen cyanide, the chemical that is toxic to people. (Cyanide poisoning works by not allowing the body to use oxygen, mainly via inhibiting the cytochrome c oxidase enzyme.  So the blood remains oxygenated after it passes through your body and back to the lungs.  Thus, it causes the body to suffocate, even though a person is otherwise breathing normally.)

There are several types of these cyanogenic glycosides, including amygdalin, dhurrin, linamarin, lotaustralin, prunasin and taxiphyllin, and they are found in some pretty common foods including, respectively, almonds, sorghum, cassava and lima beans, stone fruits (think peaches, plums, apricots and nectarines) and bamboo shoots. We don’t get sick from eating these products because either by the time they reach us, the toxins have been eliminated (e.g., blanched almonds and canned, prepared bamboo shoots), or we don’t eat the toxic part (e.g., the pit of the stone fruit where the poison precursor resides).

To prepare cassava for consumption, for the sweet variety, peeling and thorough cooking are all that is required. The bitter variety, however, is not only peeled, but the root is then grated and soaked in water for long periods to leach out the poisons. In addition, the grated bitter root is allowed to remain in water until it ferments, and then it is thoroughly cooked, this last step finally releasing the remainder of the dangerous compounds.

Properly processed, cassava is eaten grated, as chips, and frequently ground into flour and baked into crackers and breads. To make the pearls sold in the U.S., the moistened starch is pressed through a sieve, and depending on the intended use (such as in pudding or in drinks), the size may be either small or large.

However, when the plant is not properly treated, cyanide poisoning can occur. Symptoms include a drop in blood pressure, rapid pulse and respiration, headache, dizziness, pain, vomiting, diarrhea, confusion and even convulsions. A lethal dose is in the range of 0.5 to 3.5 mg of cyanide per kilogram of body weight, with children being particularly at risk due to their small size (and big appetites).
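To get a feel for those numbers, here is a back-of-the-envelope sketch, emphatically not a toxicology tool: it crudely treats the cited 400 mg/kg of cyanogenic glycosides in raw bitter cassava as if the whole amount became cyanide, when in reality only a fraction of the glycoside mass is releasable as hydrogen cyanide, so the real figures are less alarming than this worst case.

```python
# Illustrative worst-case arithmetic only, using the figures cited above.
# Assumption (overstated on purpose): every milligram of glycoside counts
# as a milligram of cyanide. Real HCN yield is only a fraction of this.

GLYCOSIDES_MG_PER_KG_CASSAVA = 400.0     # upper figure cited for bitter cassava
LETHAL_DOSE_MG_PER_KG_BODY = (0.5, 3.5)  # cited lethal range for cyanide

def lethal_cassava_grams(body_weight_kg):
    """Rough grams of raw bitter cassava spanning the cited lethal range."""
    low, high = LETHAL_DOSE_MG_PER_KG_BODY
    mg_per_gram = GLYCOSIDES_MG_PER_KG_CASSAVA / 1000.0  # 0.4 mg toxin per gram
    return (body_weight_kg * low / mg_per_gram,
            body_weight_kg * high / mg_per_gram)

low_g, high_g = lethal_cassava_grams(20)  # a roughly 20 kg child
print(f"{low_g:.0f}-{high_g:.0f} g")      # roughly 25-175 g under these assumptions
```

Even with the deliberately pessimistic assumption, the small quantities involved show why children, with their low body weight, are the most at risk from improperly processed roots.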

Sometimes, relatively low doses of toxins remain in prepared cassava, such that people are unaware, at least at first, that they are consuming it. This may lead to chronic cyanide intoxication, which can result in thyroid and neurological problems, among other issues.


Bonus Facts:

  • With the sweet variety of cassava (the type carried in American supermarkets), proper preparation is easy to achieve at home. First, cut off the tapered ends and cut the tuber into 4″ to 6″ segments. Using a sharp knife, stand each cylindrical segment up and cut away the peel. Next, slice each segment in half lengthwise, and each half again, so you end up with four, 4″ to 6″ long wedges. You can now slice off the woody inner core. At this point, the plant may be boiled, sautéed, fried or roasted – just as long as it’s thoroughly cooked. Some favorite preparations include yuca fries, empanadas, “little spiders” (fried shavings), fritters and cassava bread.
  • Tapioca pudding is also easy to make. Alton Brown suggests you start the night before: place 3.5 oz large pearl tapioca (about ½ cup) and 2 cups cold water in a mixing bowl, then cover and let stand overnight. In the morning, drain the tapioca and place it in a slow cooker with 2.5 cups whole milk, ½ cup heavy cream, and a pinch of salt. Cook on the high setting for two hours, stirring every now and then. At the same time, separately, whisk together 1 egg yolk and ⅓ cup sugar. After that’s combined, you need to temper the egg mixture (this means mixing just enough hot tapioca mixture into it, a little bit at a time, so the egg and liquid combine but the egg doesn’t scramble). Start by incorporating a dab at a time, and continue until you have added 1 cup of the tapioca mixture. Now the eggy-tapioca cream may be added to the slow cooker and incorporated. At this point, Alton suggests adding the zest of one lemon. Finally, transfer the pudding to a bowl and cover it with plastic wrap (the wrap should touch the surface to prevent a skin from forming). Refrigerate for at least an hour, until completely chilled.

Why Sugar Doesn’t Spoil

Mark U. asks: Why doesn’t sugar ever seem to go bad?

Two foods are left out on the counter – fresh tomatoes and a bowl of sugar. Within a week or so, one will develop black spots while the other remains pristine, albeit perhaps a little clumpy depending on the humidity of the air. The reason? Osmosis.

While microorganisms love sugar, they also need a certain amount of water to thrive. The minimum level of freely available water, called “water activity” (aw), is about 0.91 for bacteria, 0.8 for molds, and 0.6 for fungi (yeasts). The aw of fresh foods is generally about 0.99, while crystalline sucrose (table sugar) is a paltry 0.06.
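The thresholds above can be read as a simple lookup: given a food's water activity, which classes of microbes have enough free water to grow? A minimal sketch, using the approximate figures cited here (exact thresholds vary by species):

```python
# Which classes of microbes can grow at a given water activity (aw)?
# Threshold values are the approximate minimums cited in the article.

AW_THRESHOLDS = {
    "bacteria": 0.91,
    "molds": 0.80,
    "yeasts": 0.60,
}

def viable_microbes(aw):
    """Return the microbe classes whose minimum water activity is met."""
    return [name for name, minimum in AW_THRESHOLDS.items() if aw >= minimum]

print(viable_microbes(0.99))  # fresh food: all three classes can grow
print(viable_microbes(0.80))  # jam-like: only molds and yeasts
print(viable_microbes(0.06))  # dry table sugar: nothing grows
```

This is why the sugar bowl stays pristine while the tomato molds: at aw 0.06 there simply isn't enough free water for anything to live on.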

Bone dry in its crystal form, sucrose (C12H22O11) loves to bind with water (H2O). When present in sufficient concentrations, table sugar will suck up all of the water around it. This is why sugar is an excellent food preservative. Via osmosis, the sugar pulls the available water from within the foodstuff, reducing the food’s aw and thus making it unsuitable for microbes to grow in, or even survive.

More specifically, at the outer edge of a cell is its membrane, a semi-permeable barrier that allows some substances, including nutrients and wastes, to move in and out.  With a higher concentration of sugar outside the cell, the solution is hypertonic, meaning it will draw water from the cell, causing the bacteria (or whatever cell) to shrivel and die. (The reverse could potentially happen as well if the sugar concentration was higher inside the cell, hypotonic, with it drawing water in, perhaps to the point of bursting the cell.)

On a chemical level, it’s pretty interesting as well. Notice all the hydrogen and oxygen involved; between the two molecules, there are 24 hydrogen atoms and 12 oxygen. Each oxygen atom has a slight negative charge and each hydrogen atom has a slight positive charge, and in chemistry, opposites attract. Together, all of these hydrogen and oxygen atoms pull at each other – initially to form their respective molecules (table sugar or water), and then in the process that kills the microbe.

You can also observe this absorption effect simply by taking some cotton candy, which is made of pure spun sugar, and placing it in a humid environment. At just 33% relative humidity, cotton candy left out in the air will completely collapse and crystallize in just 3 days as it absorbs the moisture in the air. At 45% relative humidity, it will completely collapse in just one day. At 75% humidity, it takes just 1 hour. This is why it has only been since 1972 that non-“made on demand” cotton candy has been available; that was the year the first fully automated cotton candy machine was invented that could make the fluffy treat and quickly package it in watertight containers.


Bonus Facts:

  • While it may seem like cotton candy, which is made of pure sugar (sometimes with food coloring or other flavoring added), would be pretty much the worst thing in the world for you to eat,  it should be noted that it only takes about 30 grams of sugar to make a typical serving size of cotton candy, which is about 9 grams less than a 12 ounce can of Coke.  Further, cotton candy has no fat, no preservatives and is only about 115 calories per serving.  While certainly not a health food, nor filling in any way, there are numerous things people consume every day that are much worse for them health-wise.
  • Even when dissolved in small amounts of water, table sugar remains toxic to most microbes – think of jams and jellies, which have an aw of about 0.8 and so don’t (usually) spoil very easily. Of course, there are several microorganisms, called osmophiles, which can thrive in relatively low water activity environments. Two of these, Pediococcus halophilus, a bacterium, and Saccharomyces rouxii, a yeast, work together with a mold, Aspergillus sojae (or oryzae), to create shoyu, a fermented soy sauce.
  • We use other microbes in food production as well. Bacterial cultures form the basis of cheeses (such as Lactococcus lactis) and yogurt (e.g. Lactobacillus bulgaricus), as well as fermented sausages, like chorizo and pepperoni (e.g. Lactobacillus plantarum). Lactic acid bacteria are also used to help stabilize the malic acid in wines.
  • To make blue cheeses (think: Roquefort, Gorgonzola and Stilton, as well as Bleu), the molds Penicillium roqueforti and P. glaucum are added. Note that although some molds can be toxic (when they produce aflatoxins and mycotoxins), the composition of cheese prevents this – thus rendering cheese mold generally safe to eat.
  • Yeasts (e.g. Saccharomyces cerevisiae and S. pastorianus) are used for fermentation, integral for making breads, spirits, wine and beer. These processes require both a fair bit of sugar and water to make, but because there is sufficient water, this doesn’t have the effect of killing off the microbes via dehydration as happens with pure, dry table sugar. More specifically, relying on the same process that dehydrated the microbe, the water and sucrose molecules seek each other out, but this time, in the presence of sufficient water, the bonds between individual sucrose molecules are broken – and thus each molecule is separated and surrounded by water molecules, making a sugary solution. At a mixture of 50% water to 50% sucrose, the solution has an aw of .927 – high enough for yeast, mold and bacteria to thrive off the abundant sugar source.
  • The cell membrane of a microbe like bacteria has small pores that are large enough to let small water molecules (with a molecular weight (MW) of 18) to pass through, but are too small for large sugar molecules (342 MW) to normally traverse. Thus, to get sugars like sucrose and glucose to pass through a cell membrane, rather than osmosis, sugars can enter a cell through a special channel. In this process, called facilitated diffusion, proteins on the membrane bind with the sugar, which opens a portal that allows the molecules to enter and exit; with facilitated diffusion, no energy is expended and the substance moves from the area of high concentration to that of low concentration. A similar process, although one that requires the expending of energy, active transport, moves substances from areas of low concentration to those of high concentration.

Does Canadian Beer Really Contain More Alcohol Than Beer Made in the United States?

Paul E. asks: Is it true that Canadian beer has a lot more alcohol in it than American beer?

Canadians boast longer lives, safer communities, free nationalized healthcare, a cleaner environment, the most gold medals in Olympic hockey, and, of course, poutine. But, contrary to popular belief, one thing they don’t do any differently than their friends to the south is make stronger beer.

When you’re dealing with mainstream beers, those with the highest alcohol are generally stouts, porters and pale ales, with alcohol by volume (ABV) contents typically ranging between 4% and 10%, though most mainstream beers tend to stay in the range of 4%-6%, such as Canada’s popular Labatt (5% ABV), which edges out the United States’ “favorite” brew, Bud Light (4.2% ABV).

For a few more comparisons using ABV, we have the U.S.’s Busch (4.6%), Coors Original (5%), Old Milwaukee (5%), Bud Ice (5.5%), Keystone (4.4%), Keystone Ice (5.9%), and Budweiser (5%).  On the Canadian side, we have Carling Black Label (4.7%), Grizzly Canadian Lager (5.4%), Moosehead (5%), Labatt Ice (5.6%), O’Keefe Canadian (4.9%), and Molson Canadian (5%).

However, some Americans prefer their beer with a little extra kick, and United States brewers have delivered. For instance, the kind people at Dogfish Head make the 120 Minute IPA with an ABV of 20%, while the evil geniuses at Sam Adams have created Utopias, a brew that boasts a whopping 27% ABV.

Of course, Canadian brewers are no slouches and their breweries have produced some hefty quaffs, too, including Trafalgar’s Critical Mass Double/Imperial IPA with its 17% ABV, and the aptly named but apparently discontinued Korruptor (ABV 16%).

As you can see from this, both nations boast brewers that make beers with a variety of alcohol levels, but there really is very little difference between the nations’ respective brewers when you average them all out. This, perhaps, should not come as a surprise as most would like to be able to drink several beers while socializing or watching sporting events, rather than become completely hammered off just one or two beers. As a result, the sweet spot for this type of recreational drinking tends to be in that 4%-6% ABV range favored by brewers the world over.

At this point, you might be wondering where the myth that Canadian beers contained significantly more alcohol than beers made in the United States came from. And, indeed, the U.S. also has the reputation among other nations of having weak beers, not just in comparison to Canada, despite the alcohol levels in reality being pretty similar on the whole to every other beer drinking nation in the world.  So what gives?

It is generally thought that this comes from the fact that Canada (and most everywhere else) lists the alcohol levels in beer by the aforementioned alcohol by volume (ABV). As with many metrics, the United States initially bucked the trend and went with alcohol by weight (ABW): the weight of the alcohol in a drink divided by the total weight.

The key thing to note here is that alcohol is lighter than water (about 0.79 g/cc at standard temperature and pressure vs. 1.0 g/cc for water). The result is that the ABW of a beer is going to be roughly 4/5 of its ABV.

To illustrate, take a typical 12 ounce bottle of beer listed at 5% ABV: 5% of the volume in that bottle is alcohol. Measure that same bottle by ABW instead, and because alcohol weighs only about 4/5 as much as water, the alcohol makes up only about 4% of the total weight of the beer. It’s the same amount of alcohol in the bottle, but if you don’t pay attention to whether the label says ABV or ABW, one looks like less than the other.
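The conversion is just a density ratio, which a couple of lines of code make concrete. One simplification to note: this sketch divides by the density of water rather than the density of the finished beer, which is close enough to water for a rough comparison.

```python
# ABV <-> ABW conversion from the density ratio described above.
# Ethanol is ~0.789 g/cc vs ~1.0 g/cc for water, so by weight the same
# alcohol accounts for roughly 4/5 of its share by volume.

ETHANOL_DENSITY = 0.789  # g/cc
WATER_DENSITY = 1.0      # g/cc (stand-in for the beer itself)

def abv_to_abw(abv_percent):
    """Alcohol by volume -> alcohol by weight."""
    return abv_percent * ETHANOL_DENSITY / WATER_DENSITY

def abw_to_abv(abw_percent):
    """Alcohol by weight -> alcohol by volume."""
    return abw_percent * WATER_DENSITY / ETHANOL_DENSITY

print(round(abv_to_abw(5.0), 2))  # a "5%" beer by volume is ~3.95% by weight
```

So a classic American label reading 3.2% ABW describes roughly a 4% ABV beer, which is how the "weak American beer" impression took hold.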

With most beers in the United States classically listing their beer by ABW, instead of ABV, this ultimately led to people thinking beer from the United States had about 20% less alcohol on average than their international counterparts.  Today, of course, most brewers in the United States go with alcohol by volume, but the undeserved reputation for weaker beers has endured nonetheless.


Bonus Beer Facts:

  • The amount of alcohol in beer is determined by the amount of malted grain present at the beginning of fermentation. Any cereal grain (think oats, wheat, etc.) can be malted, although barley is by far the most common cereal used to make beer. (See: What Exactly is Malt?) Sprouted, dried, heated and baked, malted grain contains a natural enzyme, diastase, which, when water is added, converts the grain’s starches into sugar. Yeast is added, which digests the sugar and produces two “waste” products: carbon dioxide (making beer bubbly) and alcohol (making beer potent). Early in this process, brewers can make beer stronger by ensuring more malted grain, as measured by the amount of dissolved soluble sugars (called its original gravity, or OG), is in the batch. The higher the OG, the stronger the ultimate product, but the longer it will take to ferment. This isn’t as easy as it sounds: since only about 80% of malt sugars will ferment, all beer contains residual sugars, and with high gravity beers this can be tricky to manage. If the extra sugars of a brew that is intended to be higher alcohol are not properly tended, the result can be too sweet, even syrupy. Luckily, many brewers know how to properly manage this extra sugar and produce delicious, higher alcohol (higher gravity) beers.
  • People have been making beer since the dawn of agriculture – for at least 10,000 years. The ancient Sumerians made beer, and given that it popped up around the same time people started making bread, some opine that it was invented when dough was forgotten in a mixing bowl, left out in a rainstorm, and then warmed by the reappearing sun – grain, yeast and water, when given enough time, should ferment. Whatever the truth of how they discovered it, it is known that the Sumerian brewing process began with baking bread, crumbling it and then allowing it to sit in crocks of water until it transformed into beer. For more on all this, see: A Brief History of Beer
  • According to the Academy of Nutrition and Dietetics, having one to two beers (or equivalent drinks of any alcohol) a day is associated with lower rates of heart disease, and may reduce the risk of developing kidney stones (perhaps due to beer’s diuretic effect). Beer is also a source of soluble fiber (0.75g-1.3g in a 12 oz. bottle), and it’s a natural source of niacin, pantothenic acid, folate, B6, riboflavin, silicon and B12.
  • In Germany, Schorschbräu once produced a 57.5% ABV Eisbock, eponymously named Schorschbräu Schorschbock 57%, and still makes a 30.86% Eisbock, Schorschbräu Schorschbock 31%. And in Scotland, Brewmeister once produced a 65% ABV Eisbock called Armageddon, while BrewDog once made the Eisbock The End of History (55% ABV) and still makes the American Double/Imperial Stout Tactical Nuclear Penguin (ABV 32%) – a beer which seemingly was named after it was consumed.
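The original-gravity idea in the first bonus fact maps onto a common homebrewers' rule of thumb (an assumption here, not something stated in the article): the drop in specific gravity during fermentation approximates the alcohol produced, via ABV ≈ (OG − FG) × 131.25, where OG is the original gravity and FG the final gravity after fermentation.

```python
# Homebrewers' rule-of-thumb ABV estimate from specific-gravity readings.
# The 131.25 constant is the standard approximation; it loses accuracy
# for very high-gravity brews like the Eisbocks above.

def estimate_abv(og, fg):
    """Approximate alcohol by volume from original and final gravity."""
    return (og - fg) * 131.25

# A typical mainstream lager: OG 1.048 fermenting down to FG 1.010
print(round(estimate_abv(1.048, 1.010), 1))  # ~5.0
```

The bigger the gap between the two readings, the more sugar the yeast has converted, which is exactly why a higher-OG batch yields a stronger beer.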

Why is Mercury in Fish Such a Problem Today?

Drew F. asks: Why is there so much mercury in fish? Was this always a problem and we only just now know about it, or is it really a recent thing?

Toxic to humans, mercury must first get inside our bodies to cause damage, whether we inhale it, get it into an open wound, or eat it. While it is present naturally, human activities, including coal-fired power plants, have sent vast quantities of mercury into the air. More specifically, approximately half of the mercury in the air comes from natural sources such as volcanic eruptions, with the other half being a result of human activity. The bulk of the man-made portion, about 65%, comes from stationary combustion sources, mostly coal-fired power plants. The next greatest man-made source of atmospheric mercury, at 18%, is the processing of non-ferrous metals (most notably gold, which makes up more than half of that 18%), followed by about 6% from cement production, among other relatively minor sources.

Once the mercury is in the air, it ultimately falls into the oceans, lakes and rivers, then finds its way into fish, which are eaten by humans, with the mercury then stored in our fatty tissues (like those found in brain cells) with terrible consequences.

There are three primary forms of mercury; in ascending order of toxicity they are elemental (Hg), inorganic (HgII) and organic. Elemental mercury is the kind that is liquid at room temperature (and looks silvery); this form evaporates when heated and is toxic when it enters the body (such as when its vapor is inhaled).

The mercury found in power plant emissions is a combination of Hg and HgII, and this is what eventually is deposited into bodies of water. Once there, Hg and HgII are transformed (methylated) into the most dangerous type of mercury, organic, and specifically, methylmercury (MeHg). Until recently, we weren’t quite sure how the aquatic environment affected such a disastrous change.

Now we know that certain bacteria on the seafloor, including those that reduce sulfate and iron, are Hg methylators (meaning they turn less toxic mercury into the killer methylmercury). Researchers have recently identified a protein (hgcA) in some of these microbes, like Desulfovibrio desulfuricans, that “take[s] a methyl group from a folate compound and pass[es] it to mercury,” thus perhaps forming “key components of the mercury methylation pathway in bacteria.”

Regardless, one way or another, the mercury turns into MeHg, enters the food chain and bioaccumulates (i.e., levels continue to be gained faster than they are lost). At the lowest level of the chain, phytoplankton (tiny, single-celled algae) absorb MeHg from their environment before they are eaten by slightly larger zooplankton. At this stage, some MeHg is assimilated but the small animal is able to eliminate most of it with its waste products.

However, the zooplankton are eaten by small fish, and as the assimilation process repeats itself, more mercury is absorbed. The smaller fish are in turn eaten by ever larger fish, and at this level the mercury is “highly assimilated and lost extremely slowly.” Thus, in long-lived fish at the top of the food chain, like bluefin and ahi tuna, swordfish, walleye, marlin, king mackerel, orange roughy and shark, methylmercury levels, particularly in the fillets, can be very high. The Natural Resources Defense Council (NRDC) recommends these fish be avoided.
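The compounding nature of this food-chain buildup is easy to see in a toy model. The numbers below are made up purely for illustration; real trophic magnification factors vary widely by ecosystem and species.

```python
# Toy biomagnification model: each step up the food chain multiplies the
# methylmercury concentration in tissue by some factor greater than 1.
# The factor of 5 is a hypothetical value for illustration only.

def biomagnify(base_concentration, factors):
    """Concentration at each trophic level, starting from the base of the chain."""
    levels = [base_concentration]
    for f in factors:
        levels.append(levels[-1] * f)
    return levels

# Hypothetical 5x magnification at each of four steps:
# phytoplankton -> zooplankton -> small fish -> larger fish -> top predator
chain = biomagnify(1.0, [5, 5, 5, 5])
print(chain)  # [1.0, 5.0, 25.0, 125.0, 625.0]
```

Because the effect multiplies rather than adds, even a modest per-step factor leaves a top predator carrying hundreds of times the concentration found at the bottom of the chain, which is why the long-lived apex fish listed above accumulate so much.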

Why? Mercury, and in particular MeHg, is a neurotoxin, interfering with both the brain and nervous system. Particularly detrimental to developing fetuses and small children, even at low doses mercury exposure in humans can cause delayed development in talking and walking, interfere with attention and cause learning disabilities. In fact, high doses of mercury prenatally or during infancy can lead to deafness, blindness, cerebral palsy and mental retardation.

Adults who are exposed to mercury may suffer tremors, vision loss, numbness in fingers and toes and even memory loss. Some evidence suggests mercury exposure may even lead to heart disease.

In recent years, the Environmental Protection Agency (EPA) has promulgated rules to help slash emissions of mercury and air toxics (MATS) from power plants. For instance, mercury emissions from coal burning plants are to be reduced by 90%, acid gas emissions by 88% and sulfur dioxide emissions by 41%. The agency estimates that once the MATS standards are fully implemented it will “prevent up to 11,000 premature deaths and provide $90 billion in health benefits annually.” The costs to oil- and coal- burning power plants to implement the standards are expected to reach about $9.6 billion each year.


Bonus Facts:

  • One day on the planet Mercury (i.e., the time from one sunrise to the next) lasts 176 Earth days, even though the planet rotates on its axis once every 59 Earth days or so. A year on Mercury (i.e., the time it takes Mercury to orbit the Sun once) is 87.97 Earth days. In that sense, it remains daytime for a full Mercury year, and it stays night for one full year as well.
  • Fish are primarily white meat because their muscles never need to support their body weight and thus need much less myoglobin, or in a few cases none at all; they float, so their muscle usage is much less than, for instance, that of a 1,000 pound cow that walks around a lot and must deal with gravity. Typically, the only red meat you’ll find on a fish is around the fins and tail, which are used almost constantly.
  • The aforementioned potentially mercury-methylating protein, HgcA, is present in at least one species of bacteria that lives in the human digestive tract.
  • In addition to avoiding fish with the highest mercury levels (listed above), the NRDC recommends the following: eat only 3 servings or fewer a month of bluefish, Chilean sea bass, Spanish mackerel, grouper and canned albacore and yellowfin tuna. Other fish that may be eaten a bit more frequently, but no more than 6 times a month, include Alaskan cod, Pacific croaker, halibut, freshwater perch, lobster, mahi mahi, carp, black and striped bass, monkfish, canned chunk light and skipjack tuna, jacksmelt, skate, sablefish and sea trout.
  • Seafood that may be freely eaten include catfish, clams, anchovies, crab, crawfish, Atlantic croaker, flounder, herring, hake, haddock, butterfish, North Atlantic mackerel, oysters, ocean perch, salmon, sardines, scallops, pollock, plaice, shrimp, American shad, Pacific sole, squid, tilapia, trout, whiting and whitefish.

Where Did Peanuts Come From?

James R. asks: Where did peanuts originally come from?

The shell of a peanut (not actually a nut) is a pod, and, like other legumes, each pod may contain more than one seed. Although the cultivar common in the United States has two seeds, different peanut varieties will have anywhere from one to four. Nutritious and versatile, peanuts are a vital staple in the diets of people around the world.

Although today ubiquitous across the globe, the peanut (Arachis hypogaea) was originally native only to South America, and it is believed to have come from the foothills of the Andes in Bolivia and Peru. It is an ancient crop; anthropologists have found evidence of peanut cultivation dating back at least 7,600 years.

Tasty and hardy, the plant quickly spread. It reached Mexico by the 1st century AD, and soon after European explorers reached the New World, sailors were transporting it to China and Africa, where it became popular by the 1500s.

Shortly after, as North America was being colonized, enslaved Africans brought to work on plantations in Virginia carried peanuts with them. By the early part of the 19th century, peanuts were being grown commercially for use not only as a food, but also for oil, and even as a substitute for cocoa. During the latter part of the century, “hot roasted peanuts” were sold at P.T. Barnum’s circus as it criss-crossed the country, contributing even more to their popularity.

Innovations in the 20th century made growing and harvesting peanuts much easier; by this point, they were being consumed as nuts, oil and peanut butter. Even more farmers began to grow peanuts at this time, as a boll weevil infestation was ruining cotton crops.

Today, peanuts are cultivated around the world, with China and India producing the most. Hardy and easy to grow, peanuts pack a lot of nutrition into a small package. In fact, a quarter cup of peanuts provides significant amounts of vitamins B3, B1, E, biotin, folate, copper, manganese, molybdenum and phosphorous. It also provides nearly one-fifth of a person’s daily protein needs.

Building on this firm foundation, pediatricians working in Africa have developed a therapeutic food made from a combination of peanuts, oil, sugar, powdered milk, vitamins and minerals to combat severe acute malnutrition (SAM). Called RUTF (ready-to-use therapeutic food), the high-energy paste is cost-effective, resistant to spoilage and is believed to have saved thousands of children over the past decade.

Indiscriminately feeding peanut butter to children may shock American parents who have been warned of the risk of food allergies, but it is consistent with recent research that indicates that this fear, and its concomitant late introduction of peanuts, may have contributed to the massive increases in peanut allergies seen in recent decades.

In 2000, the American Academy of Pediatrics recommended that children not be introduced to peanuts until age 2, although by 2008 it was becoming clear that these measures were in no way preventing the development of food allergies.

More recently, the American Academy of Allergy, Asthma & Immunology in January 2013 issued recommendations that counter the old guidelines, and encourage the introduction of peanut butter (along with fish and eggs) between the ages of 4 and 6 months, in order to help train the child’s body to accept the food.

Nonetheless, peanut allergies remain a concern, and a recent study confirmed that, in particular, dry roasted peanuts cause the most severe allergic reactions. This is because “dry roasting causes a chemical modification of peanut proteins that appears to activate the immune system.”

In addition to allergies, when peanuts are improperly stored, certain types of mold will grow on them and produce aflatoxin, a carcinogen that has caused liver cancer in laboratory animals. However, a study by Consumers Union revealed that although the toxin was found in some peanut butters – ironically, at the highest levels in butters fresh-ground in health food stores – the large brands (Skippy, Jif and Peter Pan) had the lowest amounts.

Furthermore, peanuts can also carry salmonella – a potentially deadly bacterium. During 2008-2009, a salmonella outbreak traced to tainted peanut butter was linked to 9 deaths and made hundreds ill across the country. During the subsequent investigation, it was revealed that the company at the center of the outbreak, the Peanut Corporation of America, knew the peanut butter was contaminated before it was shipped.

Six years later, a federal jury in Georgia convicted its owner, Stewart Parnell, of fraud, conspiracy and other charges. His brother, Michael, was also convicted of several charges, and the plant’s “quality control” manager, Mary Wilkerson, was convicted of obstruction of justice. This was the first time the chief executive officer of a corporation was tried before a jury for knowingly selling food tainted with the bacteria.

Luckily, most health experts believe that, in moderate amounts, peanut butter is safe to consume. Accordingly, peanut butter revenues in the United States have increased over the last few years, to reach $1.5 billion in 2013.


Bonus Fact:

  • The following aren’t technically nuts: almonds, Brazil nuts, cashews, walnuts, coconuts, macadamia nuts, peanuts, Tom Cruise, and pistachios, among others.

How are Baby Carrots Made?

Amy W. asks: How do they make baby carrots?

Unlike cut baby carrots, farmers grow “true” baby carrots to be naturally small, or they are simply carrots harvested before they get a chance to mature completely. “True” baby carrots bear the same cone shape as a normal-sized carrot while being only a fraction of the size. A number of farmers produce this type of baby carrot when they thin their crops during the growing season by removing a certain percentage of immature carrots. However, other farmers purposefully harvest entire crops of carrots young in order to sell the baby carrots with their green stems attached. That little detail allows growers to prove that their baby carrots are “true” baby carrots rather than carrots cut to look miniature.

The majority of baby carrots sold in supermarkets are not this type, but are manufactured to look a certain way. The baby carrots that we know today didn’t arrive in stores until 1989. Successful carrot farmer Mike Yurosek operated a farm and processing plant in California, and he believed that there had to be a better way to use the 400 tons of carrot cull that his plant processed per day. Cull carrots couldn’t be sold in supermarkets because they had issues such as being broken or too misshapen. There were times when as much as 70% of the day’s processed crop ended up culled.

To solve the issue, in 1986, Yurosek experimented with creating smaller carrots from the cull, using an industrial potato peeler and a green bean slicer. When he sent them to a large west coast supermarket chain to see what they thought, he received a positive response. “I said, ‘I’m sending you some carrots to see what you think.’ … Next day they called and said, ‘We only want those.’”

The process of manufacturing baby carrots has been refined and streamlined in the years since Yurosek first created them. At Grimmway Farms, one of the United States’ top carrot producers, full-size carrots are planted close together in an effort to have them grow straight and to minimize the amount of carrot that will need to be trimmed later in the process. Then a mechanical harvester harvests the carrots, and they are transported to the processing facility, where workers use hoses and water to force the carrots out of the truck.

The carrots are then washed into the processing facility on luges. The luges, and the water, serve a dual purpose: washing off the loose dirt and sanitizing the carrots with trace amounts of chlorine in the water. Next, the carrots are sorted before being cut into 2-inch sections. Grimmway Farms then stores the carrots for one to five days before peeling them and rounding the edges in a spiral slide made up of a grated surface. Finally, the carrots are sorted one last time before being packaged up and shipped to stores. The extra carrot bits ground off during the peeling process either end up as cattle feed or become compost.

Concern arose over the past few years about the use of chlorine in the manufacturing of baby carrots. A deluge of chain emails circulated claiming that the baby carrots were soaked in a chlorine bath and that the white blush that appears on their surface is the chlorine coming to the surface. In response, Bolthouse Farms, the other top producer of carrots in the United States and the largest producer of baby carrots worldwide, created a website to refute the rumors.

The truth is that the process is regulated by the FDA and the amount of chlorine in the water is approximately 90% less than the chlorine level in regular tap water. That minute amount of chlorine sanitizes the carrots to prevent consumers from contracting foodborne illnesses such as those caused by E. coli and Listeria. Additionally, the white blush on carrots comes from dehydration rather than chlorine seeping to the surface. Soaking the carrots in water for a time will typically cause the bright orange color to return.



How Did Oktoberfest Start?

Michael R. asks: How did Oktoberfest get started?

As we move past the summer and into the fall, we can count on certain things: the leaves changing color, the weather growing crisper, ghost stories being told, and the celebration of the Bavarian tradition of Oktoberfest. Even here in America, Oktoberfest is beloved as a time for dancing, dressing in lederhosen, eating sausages, and, of course, drinking beer.

But what are the origins of Oktoberfest? What are we joyously celebrating? And do you actually have to wear very unflattering lederhosen?

On October 12, 1810, the Bavarian Crown Prince Ludwig married Princess Therese von Sachsen-Hildburghausen in a grand ceremony in the Bavarian city of Munich. In order to involve the “commoner,” the couple and their royal parents organized a giant wedding party for the entire city of Munich outside the city gates upon a piece of cattle grazing land.

An estimated 40,000 Bavarian citizens showed up ready to party and have fun. And they certainly did. Feasting, drinking, and dancing were among the major highlights. Horse races (no doubt with huge wagers on the line) closed the festivities, with the Prince and his new bride cheering the ponies on. The party was so memorable that the meadow outside of the city gates was named “Theresienwiese,” or “Therese’s Meadow,” in honor of the blushing bride. Today, that meadow is known as “Wies’n,” an abbreviation of the original name, and is still the site of the annual Munich Oktoberfest.

A year later, the Bavarian royalty, fully aware that the celebration had earned them tons of goodwill from their subjects, decided to do it again, this time with more horse races and an agricultural fair. They celebrated again in 1812, and so it continued for the next two hundred years, save for the roughly two dozen times Oktoberfest has been cancelled due to wars and cholera epidemics. So, formally, Oktoberfest is a celebration of the wedding between Prince Ludwig and Princess Therese, who would become King Ludwig and Queen Therese of Bavaria in 1825.

However, Oktoberfest-like events (though not on this scale) had been going on, without an official name, for many, many years before that around this time of year in the region, as you’ll soon see. More than celebrating a marriage (and, as will be pointed out later, this marriage wasn’t the most stable anyway), Oktoberfest is a commemoration of the glorious process of beer-making.

The earliest known (purposefully) concocted alcoholic beverage dates back about nine thousand years to China, with a treat made of rice, honey, and fermented fruit. The first barley beer can likely be traced to the Middle East and the Sumerians of ancient Mesopotamia five thousand years ago. In 1992, archaeologists in the region uncovered ceramics with a beer-like residue on them that was estimated to be from 3500 BCE. One of the earliest pieces of human writing is the “Hymn to Ninkasi,” not only an ode to the Sumerian goddess of beer, but an actual recipe for ancient beer from thirty-eight hundred years ago, though in all likelihood the recipe is much older than that.

From there, the art of beer-making traveled around the globe (it is well documented that the Egyptians and Romans enjoyed the beverage), with recipes and types of beer varying by region. In fact, beer was much more popular in places like England, Germany, and other European countries where it was nearly impossible to grow grapes and maintain vineyards due to the colder weather, rendering the other popular alcoholic beverage of the time – wine – virtually a non-entity. By the Middle Ages, hops were being added as a preservative and for taste, bringing beer closer to what we know today.

For years, people made and drank top-fermented beers, in which fermentation occurs at a range of 62 to 77 degrees Fahrenheit, causing the yeast to rise to the surface. This created ale, while adding hops made it a beer. It is thought that bottom-fermentation, or, as it is known today, lagering, was discovered by accident in the 16th century when beer was stored for long periods in cool, dry caverns at around 46 degrees Fahrenheit or below.

So, certain types of beer (or ales or lagers, if you want to get technical) can only be made under certain conditions, usually relating to temperature. Considering refrigeration is a relatively modern invention, this meant the kind of beer that could be made was typically reliant on the time of year and the weather.

Ales, stouts, porters, and wheat beers are all part of the top-fermented beer family and, therefore, were made when the temperatures were warmer. Lagers, pilsners, and bocks (as well as American malt liquor) are examples of bottom-fermenting beers and were made in the cold, winter months.
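The temperature split described above can be expressed as a small helper. The cutoffs follow the ranges given in the text, and the function itself is just an illustration; real brewing yeasts overlap more than this:

```python
# Rough classifier for fermentation style by temperature, using the
# ranges in the text: top fermentation at roughly 62-77°F, bottom
# fermentation (lagering) at about 46°F or below. Values in between
# are left ambiguous; actual brewing practice is less tidy.
def fermentation_style(temp_f: float) -> str:
    if 62 <= temp_f <= 77:
        return "top-fermented (ales, stouts, porters, wheat beers)"
    if temp_f <= 46:
        return "bottom-fermented (lagers, pilsners, bocks)"
    return "outside the classic ranges"

print(fermentation_style(70))  # a warm summer brewing day
print(fermentation_style(44))  # a cold cavern in winter
```

This is why, before refrigeration, the calendar itself decided what style a brewer could make.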

Now, back to Oktoberfest. If you are reading this article out of doors, you are probably noticing the air turning cooler on this late September day. Well, so did Bavarian beer-makers. As the summer turned into fall, they knew production was halting on their ales, stouts, and the like, and beginning on the lagers, pilsners, and so forth. Considering German autumns get cold pretty quickly, late September/early October was a perfect time to drink the rest of the summer beer and try the first, fresh batch of winter beer. In other words, the wedding of the Prince and Princess was just a perfectly-timed excuse to get drunk on the abundance of beer available.

So, when you head out for Oktoberfest this year, remember that you are not only celebrating a wedding, but the perfect time to try all variations of beer and, of course, lederhosen.


Bonus Facts:

  • If it’s called Oktoberfest, celebrating a wedding that occurred on October the 12th, then why does it start in September? Well, that’s due to the all-mighty tourism dollar. In the early 20th century, Germany realized that the festival was a major tourist attraction and lengthened it. It extends into September because the weather in Germany is better then than in mid-October.
  • October got its name from the Latin “octo,” meaning “eight.” If this seems odd to you, considering it’s the tenth month in the modern (Gregorian) calendar, that wasn’t always the case. It was once the eighth month (in the Roman calendar) and the name simply carried over. (See: The Evolution of the Modern Calendar)
  • As mentioned, Prince Ludwig and Princess Therese von Sachsen-Hildburghausen became King and Queen in 1825, but their royal marriage was far from perfect. It’s been well documented that King Ludwig, like so many monarchs before him, had many extramarital affairs, including with Lady Jane Digby (an English aristocrat and well-known “adventurer”) and Lola Montez, an Irish dancer, actress, and author. Montez later moved to the US and wrote a book entitled The Arts Of Beauty; Or, Secrets Of A Lady’s Toilet – With Hints To Gentlemen On The Art Of Fascinating. King Ludwig’s reputation, at first very popular with the people, took a major hit due to these liaisons, and he became much less effective as a ruler. Therese knew of all the affairs, but considering that she liked being Queen, she didn’t interfere. In fact, it was said she would routinely flee the region in order to avoid Ludwig’s mistresses.
  • Queen Therese had her own romantic past as well. She was one of several potential brides considered for Napoleon, emperor of France. Considering that he went into exile in 1814, only a few years after they would have been married, marrying Ludwig was probably the better outcome for her.
  • The oldest known brewery still in operation today is the Benedictine Weihenstephan Brewery in Bavaria, Germany. It’s thought that it first opened up shop around 768, and by 1040 it is known to have been officially licensed by the City of Freising for making beer. For more, see: A Brief History of Beer
  • Ever wonder why in America people say “fall” for the season while in much of the rest of the English-speaking world they say “autumn”? The origin of “fall” as a name for a season is not perfectly clear, though it probably came from the idea of leaves falling from trees and many plants, particularly as a contraction of the English saying “fall of the leaf.” It first popped up as a name for a season in the later 16th century in England and became particularly popular during the 17th century, at which point it made its way over to North America. Calling autumn “fall” in England has since passed out of widespread practice, but it has survived as a common name for the season in North America. (For more see: Why We Call the Seasons: Spring, Summer, Autumn, and Winter) This is not unlike how “soccer” was originally one of the most popular names for the sport in England around its inception and for a long time after, which spread to North America, only to have the name relatively recently die out in England, leading many to believe “soccer” is an “American” name for the sport.

Who Invented Diet Soda?

Matthew C. asks: Who made the first diet pop?

In order to make a diet soda (at least one people would popularly drink), a sugar substitute was needed. The first such artificial sweetener, saccharin, was discovered by accident. In the late 19th century, Constantin Fahlberg, after a long day working at the lab of the famed chemist Ira Remsen in Baltimore, Maryland, was at home eating dinner when he picked up a roll and bit into it. The roll was incredibly sweet. Fahlberg continued with his meal, soon realizing it wasn’t just the roll that was sweet, it was everything his hand touched. He had, quite literally, brought his work home with him, with some compound from that day’s experiments on his hands. (Yes, the first non-toxic artificial sweetener was discovered because a scientist didn’t wash his hands after getting chemicals all over them- not unlike how the effects of LSD were discovered.)

According to his statement to the Baltimore Sun, when he returned to his lab, he “proceeded to taste the contents of every beaker and evaporating dish on the lab table. Luckily for me, none contained any corrosive or poisonous liquid.” Finally, he discovered what had been on his hands: a substance from an overheated beaker “in which o-sulfobenzoic acid had reacted with phosphorus (V) chloride and ammonia, producing benzoic sulfinide.” Fahlberg and Remsen jointly published a paper describing the “saccharin synthesis” process, but neither of them initially understood the potential it had for commercial use.

Through the early 20th century, saccharin’s popularity as a sugar substitute grew. It was cheap, easy to make, and very sweet – approximately 200-700 times sweeter than sugar, ounce for ounce. Plus, at least according to initial tests, it had no adverse side effects. In fact, doctors started prescribing saccharin as a catch-all treatment for things like headaches and nausea.

However, it was not without its detractors. For instance, President Teddy Roosevelt had a row with the Department of Agriculture’s head chemist, Dr. Harvey Washington Wiley, over saccharin. Wiley was staunchly against the substance, stating it was “a coal tar product totally devoid of food value and extremely injurious to health.”

Roosevelt reportedly said of this, “Anyone who says saccharin is injurious to health is an idiot. Dr. Rixey (Roosevelt’s personal physician) gives it to me every day.” (Needless to say, Wiley soon lost much of his credibility and his job.)

Due to sugar rationing in both World War I and World War II, saccharin use increased and it became a very common ingredient in various products in both the United States and Europe.

However, by the 1950s, saccharin started declining in popularity. Research began indicating that large doses of saccharin led to bladder tumors and cancer in mice. Later, it was revealed that the high pH conditions found in mice, and not in humans, caused saccharin to react differently with their body chemistry than it does with ours. Once the exact cause of the tumors was determined, exhaustive tests were done to see if the same thing was happening with primates. In the end, the results came up completely and overwhelmingly negative. (Thanks to this, in 2000, saccharin was removed from the U.S. National Toxicology Program’s list of substances that might cause cancer. The next year, both the state of California and the U.S. Food and Drug Administration removed it from their lists of cancer-causing substances. In 2010, the Environmental Protection Agency concurred, stating that “saccharin is no longer considered a potential hazard to human health.”)

But being a supposed cancer-causing agent wasn’t the only issue with saccharin as a sugar substitute; it also left a metallic taste in people’s mouths, leading to other sugar substitutes being developed. People still had a sweet tooth, but didn’t want the calories and other potential issues that came with sugar.

This all brings us to the first diet soda, No-Cal.

In 1904, Hyman Kirsch opened his first soda store in the Williamsburg neighborhood of Brooklyn. An immigrant from Crimea, Kirsch thought his primarily Jewish neighborhood would be delighted by the fruit-flavored seltzer he used to make while still in the old country. He was right and Kirsch Beverages Inc. was born. (Incidentally, “Kirsch” is Yiddish and it loosely translates to “juices of black morello cherries.” So, naturally their signature flavor was black cherry.)

This regional soda sold well enough to provide Kirsch and his family a small fortune. He became a prominent member of his community and nearly fifty years later, helped found the Jewish Sanitarium for Chronic Disease in Brooklyn, New York. (It is still there today, now known as the Kingsbrook Jewish Medical Center).

What does this have to do with diet soda? While vice-president at this institution, Kirsch noticed that many of the patients at the Jewish Sanitarium were diabetic. Considering he was a soda man, he wanted to provide a sweet treat for these patients by creating a beverage that was sugar-free. However, he didn’t want to use saccharin as the sweetener, for the reasons mentioned above. So, as explained in a 1953 New York Times article about him and his son Morris, the two

“got together in their own laboratories with Dr. S. S. Epstein, their research man, and explored the field of synthetic sweeteners. Saccharin and other chemical sweeteners left a metallic aftertaste. Then, from a commercial laboratory, they got cyclamate calcium, and No-Cal was accepted by the diabetic and those with cardiovascular illnesses who could not tolerate salts in the sanitarium.”

No-Cal, of course meaning “no calories,” was born.

At first, they only offered a mild ginger ale flavor and sold it at dietetic counters. Soon, they realized it wasn’t just diabetics who were buying the soda, but people who wanted a tasty carbonated beverage, but none of the calories that normally go with it. They initially diversified to two other flavors, root beer and their traditional black cherry, and then later added lime, cola, and chocolate flavors. They started marketing it to “weight-conscious” women, with ads featuring a woman attempting to zip up a skirt with the words, “Time to Switch to No-Cal. Absolutely Non-Fattening.” By the end of 1953, mere months after introducing the drink to diabetic patients, the soda was pacing for five million dollars per year in sales (about $42 million today).
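The inflation conversion is simple ratio arithmetic. The consumer price index values below are approximate assumptions (not from the article), chosen to reproduce its roughly $42 million figure:

```python
# Rough inflation adjustment: multiply by the ratio of price indexes.
# The CPI values are approximate assumptions (1953 annual average
# ~26.7; early-2010s ~224), not figures from the article itself.
sales_1953 = 5_000_000
cpi_1953 = 26.7
cpi_now = 224.0

adjusted = sales_1953 * cpi_now / cpi_1953
print(f"${adjusted / 1e6:.0f} million today")  # → $42 million today
```

The same ratio trick works for any two years once you pick a price index for each.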

Canada Dry was the next company to get involved in the diet soda craze. They put a no calorie ginger ale on the market called “Glamour” (also pretty obviously aimed at women) in 1954. Between No-Cal and Glamour, by 1957, over 120 million bottles of diet soda were being sold per year.

In 1958, Royal Crown Cola got into the game and introduced Diet Rite. As was the case initially with No-Cal, the cola was aimed at diabetics and sold at medical supply stores. Three years later, in 1961, Diet Rite appeared on Chicago grocery shelves and diet cola became the new fad there. One year later, Diet Rite was being sold across the country. The biggest soda companies, Coca-Cola and Pepsi, rushed to develop their own diet colas – Coca-Cola’s Tab and Pepsi’s Patio Cola. By 1965, diet cola accounted for 15% of the entire soft drink market.

With the major players now in the game, Kirsch’s No-Cal rapidly faded to nothing in market share, but not before starting a revolution in carbonated beverage drinks.


Bonus Facts:

  • According to Tristan Donovan’s book Fizz: How Soda Shook Up the World, the name Tab was picked at random. After being warned by the company’s lawyers that naming a sugar-free, no-calorie soda “Diet Coca-Cola” would somehow undermine the trademark, Coca-Cola executives programmed an IBM 1401 computer to randomly spit out three- and four-letter word combinations. After generating a list of 250,000 names, the company finally settled on Tab.
  • Saccharin should technically be referred to as, “anhydroorthosulphaminebenzoic acid.” Fahlberg picked something different for obvious reasons. The name chosen, saccharin, is derived from the word, “saccharine” meaning “of or resembling sugar.”  This ultimately derived from the Latin “saccharon,” meaning “sugar,” which itself ultimately derived from the Sanskrit “sarkara,” meaning “gravel, grit.”
  • The artificial sweetener in No-Cal was also discovered by accident. This time, it was Michael Sveda of the University of Illinois who, under normal circumstances, might have done better to be careful about what he was ingesting. Sveda claimed he was smoking while working on synthesizing anti-fever medicine when he set his cigarette down for a moment. In the process, the cigarette came in contact with a substance on his lab bench. When he put the cigarette back in his mouth, it tasted extremely sweet. After doing a little investigating, he discovered the substance was a cyclamate salt – sodium cyclamate, a close relative of the calcium cyclamate that would later sweeten No-Cal.