Category Archives: Answers

Do Fish Sleep?

Ryan K. asks: Given that they probably need to move their fins constantly to stay in place, do fish ever sleep?

Less like sleep and more like suspended animation, most fish species do spend some time resting. And like us, if they don’t put in enough downtime, they try to make up for it later.

Sleep has a simple definition that includes closed eyelids and a particular brainwave pattern in the neocortex; as a result, it’s relatively easy for researchers to determine when a person, another mammal or a bird is asleep.

This definition is problematic, however, when it comes to fish, since they have neither a neocortex nor eyelids. As such, researchers have to rely on behavior observation to determine when they’re napping.

Typically when a fish is “sleeping,” four characteristics are observed: (1) inactivity for a long time; (2) a resting posture (like a droopy tail); (3) a routine (like resting at the same time, and in the same manner, each day); and (4) decreased sensitivity to its environment (hard to arouse).

Different species of fish snooze in different ways. For example, tilapia have been observed resting at the bottom of their habitats, while brown bullhead catfish were found napping at a 10-30 degree angle along the bottom, with their tails flat and fins stretched out. In addition, some species of bass and perch were seen sleeping on or under logs, while fish around a coral reef often hide in its crevices at night.

In many fish species, during this period of rest, lower cardiac and respiratory rates are observed, as are decreased mouth, gill and eye movements. For several, like the bluehead and requiem shark, the decreased sensitivity to disturbance is so profound that researchers have been able to pick them up and haul them toward the water’s surface without stirring any reaction.

Interestingly, some fish species never seem to sleep. Both mackerel and bluefish swim constantly, and although they swim less at night than during the daytime, they remain responsive to stimuli 24-7. Another night-owl, the California horn shark, is far more active at night, although even during the day it remains responsive to disturbances. Others that never seem to sleep, but may actually catch 40 winks now and then, include many fish that live in large schools. One theory holds that while some members of the group have their eyes peeled, others are able to enter a kind of daydreaming restful state.

Other fish don’t give any signs of sleeping when they’re young, but develop a sleep pattern once they reach adulthood; still others that normally sleep at night, like the tautog, suspend any signs of sleeping during periods of migration and spawning.

Some cichlids and threespine sticklebacks will forego sleep while their eggs are incubating – not necessarily to protect them, but to fan them (thereby providing a continuous supply of oxygen to the brood).

Some species of sleep-loving fish will work to catch up on lost sleep. In a 2007 study, several researchers ruthlessly pestered a group of zebrafish by alternately tapping on the aquarium and even piping noise in through an underwater loudspeaker. After the fish had been completely deprived of sleep during their normal six-hour dark period, the researchers left the tank dark the next day and observed that the zebrafish were significantly harder to arouse, and their normal mouth and gill movements had been cut in half.

Tapping on tanks seems to be a popular method with fish-sleep researchers, and in 2011, some NYU biologists used the technique to determine that cave fish don’t sleep much when compared with their surface-dwelling neighbors.

Observing four species of fish, all indigenous to northeast Mexico (a surface-dwelling species, Astyanax mexicanus, and three cave-dwellers, Pachón, Tinaja and Molino), the scientists discovered that the surface fish slept about four times more than the cave fish (roughly 800 minutes in 24 hours compared with 110-250).

Easily as seemingly sadistic (to those of us who love sleep) as the zebrafish researchers, the NYU biologists also experimented with depriving the Mexican fish of sleep – by moving their containers once every minute. Sure enough, the fish lost sleep, and like their zebrafish cousins, they caught up on it by sleeping more the next day.

Seeking to learn why the cave fish sleep less, the scientists with the 2011 study then bred cave fish with surface fish (to observe any inherited sleep characteristics). Remarkably, each hybrid inherited the cave fish’s need for less sleep, and as a result, the researchers concluded there is a genetic basis that regulates sleep, and the cave fish’s gene (requiring less sleep) was dominant.


Bonus Facts:

  • After a review of over 300 scientific publications, a panel of 18 scientists and researchers including representatives of the American Physiological Society, American Academy of Pediatrics, American Psychiatric Association and American Geriatrics Society, prepared a list of new sleep recommendations for the National Sleep Foundation. They concluded that newborns should optimally get 14-17 hours of sleep each day, infants 12-15, toddlers 11-14, preschool kids 10-13, grammar school kids 9-11, teenagers 8-10, adults (aged 18-64) 7-9, and older adults 7-8.
  • It has been estimated that in 2014, worldwide sales of sleep aids exceeded $58 billion, and by 2019, they will exceed $75 billion.
  • Total sales of pillows and mattresses in 2014 were nearly $30 billion, and they were expected to exceed that by 2019.

Duty Free

Jeremy W. asks: Why do we call non-taxed items duty free? Why is this allowed? Is this really the case, or are you supposed to pay taxes anyway when you bring items back to your home country?

Providing shoppers with a chance to buy and transport goods across international boundaries without paying local and national taxes, duty-free shops are found in airports and other ports and stations around the world. A creation of the 20th century, duty-free shops mark a sharp departure from more than 2,000 years of nations generating revenue by taxing the trade in commodities and other goods.

The practice of imposing such taxes traces its origins to at least the ancient Greeks and Romans where duties were levied on a wide variety of imports and exports. As leaders through the ages recognized that this was an easy and efficient way to collect revenue, the custom remained popular even among the so-called barbarians who conquered the great early civilizations. By the Middle Ages, feudal lords were still imposing taxes on the goods that passed in and out of (and by) their lands, and out of this tradition arose both toll gates and customs houses. In fact, by the 1300s, the words costom and custume had taken on the meaning of a rent, tribute or tax paid to a feudal lord or other local authority (such as a king’s representative).

The word duty, meaning something due or owed, dates to the end of the 13th century, while the denotation of duty as a tax was first recorded in the late 15th century.

Duty-free, as an adjective, dates back to the late 1600s, where it referred to a taxing authority agreeing to forego taking its usual fee.

The idea of exempting certain goods from import/export taxes as a matter of routine, however, is a relatively recent invention. It was born out of the recognition that the dramatic increase in international civilian travel (and in particular air travel) after World War II could produce significant revenue, specifically from tourism; one innovative Irishman conceived of the idea of placing tax-free shops in international airports.

In 1947, Brendan O’Regan convinced the Irish government to pass a law that made the transit area of Shannon Airport (where he served as Catering Comptroller) technically not a part of Ireland, and, therefore, any purchases made there wouldn’t be subject to tax. Passed on March 18, 1947, the Customs-Free Airport Act made Shannon Airport the first duty-free port in the world.

Although initially Shannon Airport’s sales were limited to Irish linen and other locally-produced goods, O’Regan and company quickly realized that if they stocked other international goods and sold those duty-free as well, they would fly off the shelves. Prominent brand names early on included Dior, Chanel, Hummel, Minox and Waterford. So popular was the practice that within a decade, duty-free had also begun to be used as a noun to denote the goods purchased in the shop.

Other duty-free shops followed, including those established by the American company, Duty Free Shoppers (DFS) in Hong Kong in 1960 and in Hawaii in 1962 (the first duty-free shop in the United States).

Note that duties are not only imposed by an exporting country, but may be levied when you enter a country, as well. The United States imposes duties on a wide variety of purchases, although there are exemptions, depending on where you purchased or received the item, how long you were there, your residency status and the value of the goods.

For example, a US resident who has spent at least 48 hours abroad is usually able to bring back $800 worth of goods without tax, with the next $1,000 worth of goods taxed at a 3% rate, and anything more than that taxed according to a tariff schedule; if the US resident did not spend at least 48 hours in the foreign country, the base exemption amount reduces to only $200. On the other hand, for US residents, the exemption doubles for imports from US insular possessions (e.g., Guam). Note that the rules and exemption amounts are a bit different for non-residents visiting the US who bring goods with them.
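
To make that arithmetic concrete, here is a minimal Python sketch of the simplified rules just described (an $800 base exemption after 48+ hours abroad, $200 otherwise, and a flat 3% on the next $1,000). Anything past that band actually follows the tariff schedule, so the single over_band_rate below is only a placeholder, and the function name is invented for illustration.

```python
def estimate_duty(goods_value, hours_abroad, over_band_rate=0.10):
    """Rough sketch of the simplified returning-resident duty rules described above.
    The rate beyond the 3% band (over_band_rate) is a placeholder; real duty
    follows the official tariff schedule and many product-specific rules."""
    exemption = 800 if hours_abroad >= 48 else 200
    taxable = max(0, goods_value - exemption)

    # The next $1,000 above the exemption is taxed at a flat 3%.
    flat_band = min(taxable, 1000)
    duty = flat_band * 0.03

    # Anything beyond that band is modeled here with a single illustrative rate.
    duty += max(0, taxable - 1000) * over_band_rate
    return duty

# Example: $2,500 of goods after a week abroad.
print(estimate_duty(2500, hours_abroad=168))  # 3% of $1,000 + 10% of $700 = $100.0
```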

Of course, there are exceptions to the exemption, and certain goods, like alcohol and tobacco, are treated differently, depending on the product and the country where it was purchased. For example, US residents returning from Europe with one liter or less of alcohol may bring that in duty-free.

It is important to remember that duty-free only applies to goods purchased for personal use, not for resale, and the taxes imposed on the latter (and any exemptions) will be governed by other laws, regulations and trade agreements such as the Generalized System of Preferences (GSP) and the North American Free Trade Agreement (NAFTA).



Why are Green Cards Called That?

Amar F. asks: Why are green cards called that when they aren’t green?

A Permanent Resident Card from the United States government allows immigrants to legally work, live, and study inside the country. Despite the name “Permanent Resident Card”, it expires after ten years. But those legal residents may apply for citizenship after five years. It is more commonly known by its shorter name, the “green card.” But how did it come to be called this? The answer lies in the history of immigrant registration in the United States and attempting to keep one step ahead of counterfeiters.

The Alien Registration Act of 1940 marked the first time that the United States government required all immigrants to be registered, allowing the government to know exactly who had immigrated to the country. Section 31 (a) of Title II specifically addressed the issue for immigrants who were not already documented when they entered the country.

It shall be the duty of every alien now or hereafter in the United States, who (1) is fourteen years of age or older, (2) has not been registered and fingerprinted under section 30, and (3) remains in the United States for thirty days or longer, to apply for registration and to be fingerprinted before the expiration of such thirty days. Whenever any alien attains his fourteenth birthday in the United States he shall, within thirty days thereafter, apply in person for registration and to be fingerprinted.

Immigrants filled out forms at their local post office, and that paperwork made its way to the federal government. The Immigration and Naturalization Service (INS) processed the forms before sending a receipt card to each immigrant. That card, known as the AR-3 form, was a white receipt that allowed immigrants to prove to the police, government, or anyone else that they had registered their immigrant status.

That process worked for a while, but the surge of immigrants searching for the American Dream after World War II caused a change in the system. It no longer made sense to have immigrants register at the post office. Instead, they registered and received the new Form I-151 at their port of entry into the country. This Form I-151, also known as the Alien Registration Receipt Card, was made out of a special pale green paper. As such, the card began being referred to simply as the “green card”.

But the green card did not stay green for long. Counterfeit green cards became a major problem in the United States, especially after the passage of the Internal Security Act of 1950. At this point, legal immigrants to the United States could exchange their AR-3 form for a Form I-151 and thus be legal permanent residents in the country. However, those without legal status could not make such an exchange. Essentially, the Alien Registration Act of 1940 did not distinguish between legal and illegal immigrants, but the new green card did. Since immigrants were subject to deportation if they could not prove their legal status in the country, possessing a green card provided significant security. That naturally meant counterfeit green cards became a major problem for the INS.

As such, between the years 1952 and 1977, the green card underwent seventeen changes as the INS worked to stay one step ahead of counterfeiters. The Form I-151 became the Form I-551, the Resident Alien Card, in 1977. This version of the green card was the first not to be made of paper and the INS only allowed a single facility in Texas to produce the Resident Alien Cards in the name of making them perfectly uniform. It was also the first to have the immigrant’s fingerprint and signature on the card, along with no expiration date.

The INS changed the green card again in 1989 in response to complaints from immigrants’ employers. Employers argued that checking the validity of an immigrant’s resident status was difficult due to the numerous versions of the green card. So in 1989, the INS adopted a peach-colored Form I-551. Another change happened in 1997 when the INS again tried to stay ahead of counterfeiters by adding a unique document number to the card, which was now rebranded a “Permanent Resident Card”. In 2004, the Department of Homeland Security seal and a hologram were also added to the front of the card.

Coming full circle, while the name “green card” stuck around even though cards hadn’t been green for many decades, the new version of the Permanent Resident Card released in May of 2010 returned to the green color. These new cards released by the U.S. Citizenship and Immigration Services contain the latest, high-tech attempts to thwart counterfeiters. Security technologies include laser-engraved fingerprints, holographic images, and embedded data.


Bonus Fact:

  • One increasingly popular, though somewhat controversial, method for immigrants to qualify for a green card is called the EB-5 program. This program allows immigrants to invest $1 million in a U.S. project or program that will create at least 10 jobs in the country (not counting potential jobs created for the immigrants or their immediate families). Variations of the program allow foreigners to invest a lesser amount of $500,000 if the project meets certain criteria, such as creating those jobs in areas with high unemployment rates. Of course, the controversy here lies in this system favoring the rich.

Why Gnats Swarm

Gerry D. asks: Why do gnats swarm in a ball in the air?

A common sight in the spring and summer, the seemingly unprofitable and pointless habit of gnats to hover in a cloud is, in fact, the single most productive thing they’ll ever do with their short lives.

Although there are a wide variety of non-biting, but eminently annoying, gnats and midges, their lifecycles are all pretty similar. Each begins as an egg, and, with some species, they lay thousands at a time.

When they hatch (after a period of no more than a week), each enters a larval stage (lasting anywhere from 10 days to 7 weeks), followed by a pupal stage that lasts another 3 to 20 days, and then each emerges as an adult. And this is when they swarm.

Fulfilling their biological imperative, gnats and midges (like many other insects) swarm in order to reproduce. When the moment is right, females take flight while secreting a sex hormone that attracts the males, who (like those from many species) can catch the scent even from a distance. The males seek out the female, swarming around her, trying to be “the guy,” and, in the process, forming into the well-known ball. Yes, that ball of gnats you accidentally walked through was simply a group of insects trying to get it on.

Shortly after mating (and subsequent egg-laying), the lovers die.

Experts note that many insect species swarm, including colony-living insects, and many believe this is to promote genetic mixing. Members of different colonies are able to synchronize their swarming by following environmental cues such as temperature, daylight, humidity and wind speed (high winds can be bad for business).

In addition to continuation of the species, scientists have discovered that the health of many kinds of social insects depends upon their proximity to their packs. For example, Mormon crickets rely on their groups for safety; in a 2005 experiment where some individuals were separated from the group and set loose on their own, within two days 50-60% of the loners were dead, compared with zero deaths among those who remained with the pack.

Beyond survival, pack proximity also seems to play an important role in the development of some species. In a 2012 report, it was revealed that cockroaches, like misery, love company, and when left alone, they can suffer from a variety of maladies including delays in molting and difficulties with mating.

Cockroaches also enjoy communicating with each other, and they sometimes do it with their poo. Exuding chemicals called cuticular hydrocarbons, they often mix these with their feces to leave a “scent trail” for their compatriots to follow back to a food source [think about that the next time you see a cockroach on the kitchen counter]. These chemicals also help individuals identify fellow members of their bands, so they can distinguish their pack’s poopy trails from all of the others.


Bonus Facts:

  • One of the most spectacular swarmers is the mayfly. Hibernating in the mud of a riverbed for up to three years before hatching, mayflies are prolific, if short-lived. After reaching adulthood and emerging on the water’s surface, adult mayflies only have about three hours to mate. As you can imagine, this makes for a pretty frantic mayfly, and together with their large numbers, these frenzied insects create a truly dramatic (and to many, revolting) swarm. One was so large, in fact, that it appeared as a rainstorm on weather radar.
  • A group of cockroaches is known as an intrusion, while a pack of gnats is referred to as both a cloud and a horde. Obviously, hornets are a nest, while bees are a hive (and also a swarm and a grist). Flies (and ferrets) in a bunch are called a business, and locusts, a plague.
  • Other fun names for congregating animals include a shiver of sharks, a knot of toads and a bask of crocodiles. For birds, a group of woodpeckers is a descent, starlings a murmuration, owls a parliament, hawks a kettle and crows a murder. Wolves together move in a route, porcupines are a prickle, hyenas a cackle, hippos a bloat, foxes a skulk, buffalo an obstinacy, bears a sleuth, cats a pounce and kittens an intrigue (aww).

I Before E, Except After C

Jeremy R. asks: Is it true that more words break the I before E rule than follow it? If so, how come this is taught at all?

If you ever want to start a fight among a group of linguists and orthographers, bring up the grammar school rule: “I before E, except after C,” which has been around since at least the mid-19th century. You will likely begin the most sedate and erudite brawl you could ever hope to witness.

First, there are arguments over what exactly the rule should be. Some (like me) were taught what I’m calling the “neighbor [ei] rule”: “I before E, except after C, or in words that say ‘ā’ [ei], as in neighbor and weigh.”[1]

Others were given a variation, hereinafter called the “receive [i] rule”: “I before E, except after C, when the sound is ‘ee’ [i].”

Although not perfect, it appears the latter version makes a better rule (if you’re going to have one), since it has fewer exceptions given that a smaller number of words are brought within its orbit in the first place.

Note that some words fit the first part of both rules:

ie: believe, collie, die and friend

cei: ceiling, deceive and receipt

After that, the list of compliant words (and exceptions) begins to deviate. Consider this list of words that do not violate the receive [i] rule, but do violate the neighbor [ei] rule:

ei: counterfeit, feisty, foreign, kaleidoscope, poltergeist, seismograph, surfeit and their

cie: ancient, deficient, glacier, proficient, society, science and sufficient

ie [ei]: gaiety

Of course, there are some exceptions that violate both rules as well, and these include:

ei: caffeine, leisure, protein, seize and weird[2]

cie: deficiencies and species

All of this leads to another argument: whether or not to have a rule at all.

Some, like Geoffrey K. Pullum (who subscribes to the receive [i] rule, although for him the phoneme is written [i:]), have characterized it as “a very helpful guide to one small point in the hideous mess that is English orthography.”

And others, like Mark Wainwright, have noted that because the “except after C” portion “covers the many derivatives of Latin capio [= “take”] . . . receive, deceit, inconceivable . . . [the] simple rule of thumb is necessary” and efficacious.

Of course, there are those who find the exceptions have swallowed the rule, rendering it useless, and these include the UK’s education department which, in 2009, advised teachers through a document titled, Support for Spelling that: “The I before e except after c rule is not worth teaching [since] it applies only to words . . . which . . . stand for a clear /ee/ sound and unless this is known, [many] words . . . look like exceptions. There are so few words where the ei spelling for the /ee/ sound follows the letter c that it is easier to learn the specific words.”

This point of view finds support in the claim, made on the BBC show QI, that there are 923 words spelled with cie, and only about 40 or so spelled with cei; for those who follow the neighbor [ei] rule, the sheer number of exceptions has rendered the rule “dumb and useless.”
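
If you want to sanity-check a count like the QI figure yourself, a few lines of Python against any one-word-per-line word list will do it. This is only a sketch: “words.txt” is a placeholder path, and a raw letter-sequence count says nothing about pronunciation, which is exactly what the receive [i] rule turns on.

```python
# Count how many words in a word list contain "cie" versus "cei".
# "words.txt" is a placeholder path; use any one-word-per-line list.
from collections import Counter

counts = Counter()
with open("words.txt") as f:
    for word in f:
        word = word.strip().lower()
        if "cie" in word:
            counts["cie"] += 1
        if "cei" in word:
            counts["cei"] += 1

print(counts)  # e.g. Counter({'cie': ..., 'cei': ...})
```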



Endnotes

[1] For reference: a chart of IPA pronunciation symbols for both American and British English.

[2] In American pronunciation, the vowel sounds in leisure and weird are pronounced [i], although in British English they do not make the /ee/ (as in receive) sound, and so would not be exceptions to the rule.

Where the Word “Jumbo” Came From

Jim R. asks: Is it true that the word jumbo came from the name of an elephant?

The word “jumbo” can roughly be understood to mean “a large specimen of its kind” and it’s often posited that the word entered the English language thanks to an elephant. While this is certainly a nice story, the truth is a little more complicated.

First things first, though etymologists are in agreement that the word “jumbo” in specific reference to something that is quite big was popularised by an African bull elephant called Jumbo, there is evidence that the word existed long before he was even born.

In the latter half of the 19th century, Jumbo the elephant was arguably one of the most famous animals on Earth. Throughout his life as a zoo attraction and later a circus performer, he is estimated to have been seen by several million people. While nobody is exactly sure when Jumbo was born, it’s believed that he was captured as a young calf in 1860 somewhere in East Africa after his mother was shot by hunters. In 1862, after travelling hundreds of miles on foot and by boat, Jumbo was sold to a large botanical garden in France called the Jardin des Plantes.

Jumbo’s life at the Jardin des Plantes was reportedly not so great; along with awful conditions, Jumbo was constantly overshadowed by the garden’s two other elephant attractions, Castor and Pollux (who are themselves famous for being eaten during the 1870 Siege of Paris). In 1865, Jumbo was sold to the London Zoo after years of negotiations spear-headed by the zoo’s Superintendent, Abraham Bartlett. Bartlett had supposedly been very annoyed to learn that the French had managed to get their hands on the first African elephant specimen to set foot in Europe; so much so that he eventually agreed to send the Jardin des Plantes the following list of animals in return for the calf: “A rhino, a jackal, two eagles, a pair of dingoes, a possum and a kangaroo”.

When the young elephant finally arrived in London, Bartlett immediately recognised that it had been poorly looked after and handed its care off to his most talented zoo-keeper, Matthew Scott. Scott, who was noted as having a knack for understanding animals, quickly bonded with the elephant and under his expert care, Jumbo eventually grew to become one of the largest elephants the world has ever known, standing at an impressive 12 feet tall in his prime (later advertised as over 13 ft. tall).

Jumbo’s massive size and gentle temperament eventually led to him being bought by legendary circus owner P.T. Barnum in 1882. The sale of Jumbo caused a considerable stir in England and the London Zoo was heavily criticised by the public for daring to sell him. You see, during his time at the London Zoo, Jumbo had become quite a celebrity and boasted such high-profile fans as Queen Victoria, who personally petitioned the zoo not to sell him. The zoo, however, were worried about Jumbo entering what is known as “musth”, a condition that can affect bull elephants resulting in an increased production of testosterone and unpredictable, violent behaviour. Jumbo’s sheer size meant that if he entered musth, he could have quite literally destroyed the entire zoo if he felt like it. He had proven before that he was more than capable of destroying the doors to his cage and the chains that were supposed to hold him in place; so the zoo decided that it simply wasn’t worth the risk.

Once he got to America, Jumbo’s arrival was milked for all it was worth by Barnum, who had his image emblazoned on everything he could find. Jumbo proved to be one of the most popular attractions at Barnum’s show and he was billed by the entrepreneur as “The biggest elephant in the world”. Sadly, just 3 years later, Jumbo was killed after being struck by a train and dragged along for about 300 feet before the train came to a stop.  Reportedly, as he was taking his dying breaths, he used his trunk to hold the hand of his long-time trainer.

Ever the showman, expert marketer, and businessman, Barnum had Jumbo’s remains preserved and toured with them for many years.

Back to the topic at hand. While it’s true that Jumbo was a humongous specimen, even by elephant standards, and also that after his death his name came to be synonymous with anything large, the word jumbo existed long before he did. As early as 1823, the word “jumbo” was a slang term used to describe “a big, clumsy person, animal or thing.”

Unfortunately, we’re unable to trace the meaning back much further than that because it originated in slang speech, which is so often notoriously poorly documented. One origin theory posited by a slang dictionary published in 1823, with possibly the longest title of any dictionary in history (80 words), is that it perhaps derived from the phrase “mumbo-jumbo”.

Mumbo-jumbo, which many of you will likely understand as a synonym for something that is meaningless or intentionally convoluted and confusing, derives, according to the Oxford English Dictionary, from “the name of a grotesque idol said to have been worshipped by some tribes,” often claimed to have been called Maamajomboo. It’s believed the word was adopted by the English language during the 18th century and that it may have been used as a derogatory term to refer to the so-called “incomprehensible” languages used by Africans. How the singular word “jumbo” came to be derived from this phrase isn’t precisely known.

As for how Jumbo the elephant got his name in the first place, this is similarly not known. One theory is that Jumbo was given his name by Abraham Bartlett who reportedly just liked the sound of the word, evidenced by the fact that he once called a gorilla “mumbo”. Another theory is that it was actually Jumbo’s trainer and handler, Matthew Scott, who named him “Jumbo” after the slang term because of his clumsy gait and heavy-set frame.


Bonus Facts:

  • Despite his huge size, Jumbo was a remarkably calm elephant as long as his trainer was around and he was known to have given thousands of children rides on his back throughout his life.
  • Matthew Scott and Jumbo shared a very close relationship throughout their time together and Jumbo would reportedly throw tantrums whenever Scott went home. When Jumbo was bought by P.T. Barnum, Scott was hired to be his keeper simply because he was the only one Jumbo would listen to. Abraham Bartlett was so terrified at the thought of Jumbo going on a rampage if something happened to Scott that he wrote a letter to the Zoological Society asking for them to provide him with a method of killing Jumbo if Scott was ever injured.  As mentioned, reportedly as Jumbo was taking his dying breaths, he used his trunk to hold the hand of Scott.
  • According to Matthew Scott, he would share a bottle of beer with Jumbo every night. Scott claimed one night when he drank the bottle himself and promptly fell asleep before giving Jumbo his share, Jumbo gently lifted Scott out of bed with his trunk and put him on the floor in front of him. When Scott woke up, he realised his mistake and promptly got Jumbo a fresh beer.

Time Before Ubiquitous Clocks

Anonymous asks: Through most of history there were few clocks and only recently alarm clocks, so how did people know when to get up precisely? Or how did they schedule meetings or when to open up shop or close, etc.? Basically, I guess I’m just wondering how people kept track of time in order to go about managing daily life?

As much as we hate the alarm that drags us from sleep to face the day, it is hard to imagine how people organized themselves and their collective activities before the invention and widespread use of mechanical or digital clocks. Clever and adaptable, we humans actually seem to have managed rather easily by relying on simple methods, some of which we still see in our timekeeping today.

Universally, human timekeeping has always been related to the Sun and its movement across the sky. Ancient cultures, like the Babylonians, Chinese, Egyptians and Hindus, even from the earliest days of civilization were dividing the Sun’s cycle into periods.

Of course, one of the drawbacks to this early way of keeping time was that, depending on the season, the length of each period could vary quite a bit. Another drawback was that at night the Sun was most unhelpfully missing from the sky, but Egyptians, like us, still needed to measure time.  After all, how else would they know when the bars closed?  To get around this problem, their astronomers observed a set of 36 stars, 18 of which they used to mark the passage of time after the Sun was down.  Six of them would be used to mark the 3 hours of twilight on either side of the night and twelve then would be used to divide up the darkness into 12 equal parts.  Later on, somewhere between 1550 and 1070 BC, this system was simplified to just use a set of 24 stars, of which 12 were used to mark the passage of time.

The Babylonians used a similar system, and also had seasonally-adjusted hours, so that the Babylonian hour comprised 60 minutes only on the spring and fall equinoxes. Sixty was important to the Babylonians, who inherited a base 60 calculation system from the Sumerians; an ingenious choice, as 60 is a convenient number for doing math without a calculator since it is evenly divisible by each of the numbers 1 through 6, among others, and, most relevant to timekeeping, by 12.
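
For what it’s worth, the divisibility claim is easy to verify with a throwaway line of Python (purely illustrative):

```python
# All whole-number divisors of 60 -- note that 1 through 6 and 12 all appear.
print([n for n in range(1, 61) if 60 % n == 0])
# -> [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```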

Rather than using variable length hours, Greek astronomers in the 2nd century BC began using equal length hours in order to simplify the calculations when devising their theories and in experiments, although the practice did not become widespread until after the introduction of mechanical clocks; as such, regular folks continued to rely on seasonally-adjusted hours well into the Middle Ages.

The impetus to develop mechanical clocks in Europe first arose among monks who needed accurate timekeeping in order to properly observe daily prayer, as well as maintain their rigid work schedules. The first recorded mechanical clock in medieval Europe was constructed in 996 in Magdeburg, Germany. By the 14th century, large mechanical clocks were being installed in churches across Europe, and the oldest surviving example, at Salisbury cathedral, dates to 1386.

Innovation led to smaller clock parts, and the 15th century saw the appearance of domestic clocks, while personal timepieces were seen by the 16th. Note that, even well into the Renaissance, clocks did not display minutes, and the idea that an hour was divided into 60 of them was not well known until nearly the 17th century.

So how did people keep appointments? One early method, practiced especially around the Equator, was to point at the place in the sky where the sun would be when you wanted to meet.

A more common practice, particularly in the middle latitudes, was to rely on a sundial; all types proliferated and included everything from a simple stick shoved into the ground to the shadows that fell from landmarks (such as Egypt’s obelisks) to formally crafted devices. And, of course, advanced civilizations had a variety of other timekeepers as well, including water clocks and hourglasses, going all the way back to at least 1400-1500 BC.

Of course, these methods were far less efficacious in extreme northern (or southern) latitudes. To accommodate their mercurial Sun, the Scandinavians invented daymarks – a system of dividing the horizon into eight sections, one each for north (midnight), south (midday), east (rise-measure), west (mid-evening), northeast (ótta), southeast (day-measure), southwest (undorn) and northwest (night-measure). The time of day was known by noting over which of these daymarks the Sun stood at that moment.

Regardless of the method of knowing the hour, our ancestors also had to come up with ways to get up on time. One simple technique relied on a full bladder and was accomplished by the simple expedient of drinking a lot of liquid at bedtime. Yet another time-tested simple method, at least in rural settings, was keeping a rooster close at hand (see: Why Do Roosters Crow?)

On the other hand, some approaches were dependent on the kindness of others. In communities served by a sizable religious institution, residents could often rely on the ringing of church bells or the call to prayer. Likewise, when factories were first introduced in the 18th century, workers could depend on the factory whistle to get them where they needed to be on time.

Later, as people moved further away from their employers, some paid Knockers-Up, early-risers who carried long sticks, to tap on their doors and windows at the appointed time.

Remarkably, today’s ubiquitous, bedside, adjustable alarm clock did not become popular, at least in the United States, until the 1870s.


Bonus Fact:

  • Have you ever wondered what a.m. and p.m. stand for?  Well, wonder no more: a.m. stands for “ante meridiem,” which is Latin for “before midday”; p.m. stands for “post meridiem,” which is Latin for “after midday.”

How Do They Make White Gold White Given That It’s an Element?

Elizabeth F. asks: Is white gold really gold? If it is, how do they make it white when it’s an element?

The purest form of gold is, of course, golden and is referred to as 24 karat gold. Pure gold is much too soft for use in jewelry and can even be dented by simply pressing your fingernail hard against it. Needless to say, daily wear, particularly for things like rings and bracelets, would see such jewelry bent and deformed quite quickly. So the gold must be made more durable by mixing it with another kind of metal or metals, creating a gold alloy.

As far as terminology goes, the 24 karats that make up pure gold translate to all twenty-four parts being gold. So an 18 karat gold ring is constructed of 18 parts pure gold and 6 parts something else, adding up to a total of 24 (75% gold, 25% other). The same formula can be applied to any karat of gold jewelry, such as a 14 karat gold pendant- made up of 14 parts gold and 10 parts other metals.
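
Put as a formula, the gold content is simply the karat value divided by 24. A minimal Python sketch of that conversion (the function name is invented for illustration):

```python
def gold_fraction(karat):
    """Return the fraction of pure gold in an alloy of the given karat (out of 24 parts)."""
    if not 0 < karat <= 24:
        raise ValueError("karat must be between 1 and 24")
    return karat / 24

print(f"{gold_fraction(18):.0%}")   # 75% -- 18 parts gold, 6 parts other metals
print(f"{gold_fraction(14):.1%}")   # 58.3% -- 14 parts gold, 10 parts other metals
```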

So what are these other metals? If the desired result for a particular piece of jewelry is still a golden color, common metals mixed with gold include copper and zinc.

With white gold, the jeweler typically uses metals like silver, palladium, manganese, and nickel, with nickel for a time being the main bleaching agent due to its cheapness. However, nickel has fallen out of favor in some jewelry circles because it more commonly causes allergic reactions.

All that said, while the resulting piece in these cases will be bleached to more of a silvery color, it doesn’t typically produce the vibrant silver hue commonly associated with white gold today. (Although, there are methods to achieve this that have been very recently developed.) But for the vast majority of white gold out there, the vibrant silver color is created by coating the white gold alloy with a thin layer of rhodium, a metal in the platinum family.

The choice of rhodium comes from its bright white color along with its extreme durability. Eventually, though, it will wear down, at which point it will reveal the yellow tint of the white gold beneath. Depending on the exact makeup of the white gold, this may range from barely noticeable to extremely apparent. Unscrupulous jewelers may even simply use regular yellow gold alloys plated with rhodium in their “white gold” as a way to save a little money in production; the buyers would have no idea until the rhodium wore off, which takes a while.

Whatever the case, if you notice a yellow tint after a while, simply having the white gold cleaned then having a jeweler apply a new coating of rhodium returns the metal to its previous silvery shine, and usually is relatively inexpensive to have done. In some cases, jewelers even offer this service free if you originally purchased the item in question from them.


Bonus Facts:

  • While yellow gold and white gold tend to be the most common in jewelry, other colors can be made by adding different metals to the gold. For instance, pink and rose gold are made by adding a higher ratio of copper, with the amount of copper added determining the hue. Green gold can be created by adding silver, copper, and zinc alloys. Even a black color can be achieved by coating white gold with a black rhodium.
  • Platinum jewelry contains a greater percentage of platinum to alloys than the ratio of traditional gold to alloys in white gold, often containing 90% to 95% platinum with the remaining percentage of material generally being iridium, ruthenium, or cobalt.

The Invention of the Cardboard Box

Gared O. asks: Who invented the cardboard box?

The cardboard box goes largely unappreciated. Yet, it is indispensable to our daily living. It holds all of our knick-knacks and personal mementos when we move or have things shipped. It holds our breakfast cereal. It has been used for countless children’s art projects; fashioned into a robot head or a horse’s body. Heck, it is even in the International Toy Hall of Fame in Rochester, New York. As with a lot of things that have become commonplace, hardly any thought has been put into how and why it was invented and by whom. In fact, the history of the cardboard box, besides rarely being talked about, isn’t particularly well documented either. However, cobbled together through several sources, patents, and old forgotten texts, we can start to piece together the story of the ubiquitous cardboard box.

It seems the beginnings of cardboard date back to China, about three or four thousand years ago. During the first and second century B.C., the Chinese of the Han Dynasty would use sheets of treated Mulberry tree bark (the name used for many trees in the genus Morus) to wrap and preserve foods. This fact is unsurprising considering the Chinese are credited with the invention of paper during the Han Dynasty, perhaps even around the same time (the earliest paper ever discovered was an inscription of a map found at Fangmatan in the Gansu province).

Paper, printing, and cardboard slowly made their way west thanks to the Silk Road and trade among the empires of Europe and China. While cardboard likely ended up in Europe much earlier than the 17th century, the first mention of it comes from a printing manual entitled Mechanick Exercises, written by Joseph Moxon (a printer of math books and maps, who also believed, rather bizarrely, that the Arctic was devoid of ice because there was sunlight there 24 hours a day) and later annotated by Theodore Low De Vinne (a well-known scholarly author on typography). In the manual, it reads:

Scabbord is an old spelling of scabbard or scale-board, which was once a thin strip or scale of sawed wood…. The scabbards mentioned in printers’ grammars of the last century were of cardboard or millboard.

From this description, it can be inferred that cardboard was used as a printing material and something to be written on, rather than in box form for storage.

The first documented instance of a cardboard box being used was in 1817, for a German board game called “The Game of Besieging,” a popular war strategy game. Some point to an English industrialist named Malcolm Thornhill as the first to make a single-sheet cardboard box, but there is scant evidence of who he was or what he stored in the cardboard box. It would be another forty years before another innovation rocked the cardboard world.

In 1856, Edward Allen and Edward Healey were in the business of selling tall hats. They wanted a material that could act as a liner and keep the shape of the hat, while providing warmth and give. So, they invented corrugated (or pleated) paper. Corrugated paper is a material typically made with unbleached wood fibers, with a fluted sheet attached to one or two liner boards. They apparently patented it in England that same year, though English patents from prior to 1890 are notoriously difficult to find and most have yet to be digitized, so we weren’t able to read over the patent as we normally would while researching.

Who knows if Albert Jones of New York ever encountered an Allen/Healey tall English hat, but the next fold in the cardboard story belongs to Mr. Jones. In December of 1871, Albert Jones was awarded a patent in the United States for “improvement in paper for packing.” In the patent, he describes a new way of packing that provides easier transportation and prevents breakage of bottles and vials. Says the patent,

The object of this invention is to provide means for securely packing vials and bottles with a single thickness of the packing material between the surface of the article packed; and it consists in paper, card-board, or other suitable material, which is corrugated, crimped, or bossed, so as to present an elastic surface… a protection to the vial, and more effective to prevent breaking than many thicknesses of the same material would be if in a smooth state like ordinary packing-paper.

The patent goes on to make clear that this new packing method isn’t just relegated to vials and bottles, pointing out it could be used for other items, as well as not limited “to any particular material or substance, as there are many substances besides paper or pasteboard which can be corrugated for this purpose.”

A few years after this, the cardboard box that we know and love finally, quite literally, took shape. The Scottish-born Robert Gair owned a paper bag factory in Brooklyn. In 1879, a pressman at his factory didn’t see that the press rule was too high and it reportedly cut through thousands of small seed bags, instead of creasing them, ruining them all before production was stopped and the problem fixed.

Gair looked at this and realized if sharp cutting blades were set a tad higher than creasing blades, they could crease and cut in the same step on the press. While this may seem like an obvious thing, it’s not something any package maker had thought of before. Switching to cardboard, instead of paper, this would revolutionize the making of foldable cardboard boxes. You see, in the old way, to make a single sheet folding box, box makers would first score the sheets using a press, then make the necessary cuts with a guillotine knife by hand.  Needless to say, this made mass producing foldable boxes prohibitively expensive.

In Gair’s new process, he simply made dies for his press such that the cutting and creasing were accomplished all in one step. With this modification, he was able to cut about 750 sheets in an hour on his press, producing about the same amount in two and a half hours on one single press as his entire factory used to be capable of producing in a day.

At first, Gair’s mass-produced foldable boxes were mostly used for small items, like tea, tobacco, toothpaste, and cosmetics. In fact, some of Gair’s first clients were the Great Atlantic & Pacific Tea Company, Colgate, Ponds, and tobacco manufacturer P. Lorillard. However, in 1896, Gair got his biggest client yet for his pre-cut, pre-creased cardboard box – the National Biscuit Company, or Nabisco, with a two million unit order. With this leap in product packaging, customers could now purchase pre-portioned crackers in a wax-paper lined box that kept the crackers fresh and unbroken. Before this, when buying these crackers, customers would have a store clerk scoop them from a cracker barrel that offered far less protection against moisture and vermin.

From here, sales of such boxes exploded and by the turn of the century, the cardboard box was here to stay. So next time you are loading your closet with cardboard boxes full of old clothes, buying something off of Amazon, or just opening a box of saltine crackers, you can thank a German board game for first commercially using a cardboard box and one of Robert Gair’s employees slipping up, inspiring a small but momentous tweak that made mass-produced, foldable cardboard boxes possible.


Bonus Fact:

  • Legend has it that Robert Gair’s son, George, named the biscuits that Nabisco were putting in Gair’s cardboard boxes. According to the book Cartons, Crates and Corrugated Board, by Diana Twede, Susan E.M. Selke, Donatien-Pascal Kamdem, and David Shires, Gair’s son told the executives that the biscuits “need a name.” This, supposedly, inspired them to call them “Uneeda Biscuits.”

Why Do Judges Wear Robes?

Juana R. asks: Why do judges wear robes? Is this still a requirement or just a tradition?

Most of us in the western world expect judges to wear a robe when they sit behind their bench in a courtroom, and they usually do not disappoint. But we rarely think about how the long, usually black, robe became the standard outfit for the men and women who preside over criminal and civil cases in the courtroom. The tradition began about seven hundred years ago in England.

Robes became the standard uniform for judges in England during the reign of Edward III, who ruled from 1327 until 1377. At this point, they had already been the standard garb for academics for over a century (see: Why Graduates Wear Caps and Gowns), as well as worn in other settings. For instance, this type of garb would also have been appropriate wear for a visit to the royal court, so a judge wearing his robes outside of the courtroom would not have been out of place.

The standard robes for judges at this point came in three colors: violet for the summer, green for the winter, and scarlet for special occasions. Judges often received the material for these robes as part of a grant from the King. The last mention of the green robes occurs in 1534, and new guidelines dictating which robes could be worn at certain times appear in 1635. The new guide suggested judges wear black robes with a fur trim during the winter and violet or scarlet robes that feature pink taffeta for the summer.

Historians believe that the transition to only black robes may have begun in the second half of the 17th century in England. But it is not known for sure what exactly caused the switch, though a popular theory ties the black robes to the mourning period after the death of a monarch.  Some historians claim that the funeral of Queen Mary in 1694 helped cement the already worn black robes as the typical attire while others point to the death of Charles II in 1685 as the start of that tradition.

Whatever the case, additional guidelines instructing judges to wear black robes appeared in the middle of the 18th century. At that point, English judges typically wore a scarlet robe with a black scarf and a scarlet hood when presiding over criminal cases. But for civil cases, they often wore black silk robes.

When the judges in the American colonies presided over legal proceedings, whether civil or criminal cases, they carried over the English tradition of wearing robes. This topic produced debate between Thomas Jefferson and John Adams after the colonists won the American Revolution and formed their own government. Jefferson argued that American judges should distance themselves from the traditions set down by the English and wear only a suit in court. Adams, a lawyer, disagreed and wanted judges to continue wearing the robes and wigs of English judges. A compromise ensued, with it being decided that the new American judges should wear the robe and not the wig.

Judges in the United States continue to wear robes in the courtroom, despite the lack of a rule requiring them to be worn. Even in the Supreme Court of the United States, there is no requirement that its justices wear a robe in court. Yet due to the tradition, and perhaps because the robe distinguishes its wearer from everyone else in the courtroom as a mark of authority, judges continue to wear them. Judges have been wearing robes for over seven hundred years, after all.  Former Associate Justice of the Supreme Court Sandra Day O’Connor admits that wearing the robe may simply be a matter of tradition, but she likes what it symbolizes. “It shows that all of us judges are engaged in upholding the Constitution and the rule of law… We have a common responsibility.”

That is not to say that judges always wear their robes or stick to the traditional black robe. Judge ShawnDya L. Simpson of Manhattan, New York has admitted to forgoing the robe altogether in favor of a lime green suit on occasion.  Even when she wears her robe, she does not always fasten all the buttons on it. She also occasionally accents her robe by wearing a scarf or a necklace. Justice Bruce Allen of the New York State Supreme Court usually leaves off his robe while sitting at the bench. He generally only wears it when there is a jury present in the courtroom.


Bonus Facts:

  • If you’re curious, beneath their robe, judges most often wear formal clothes such as button-up shirts with neckties, blouses, and slacks. That said, it is not totally unheard of for them to wear less formal clothing, such as golf shirts, underneath their robes in the warmer summer months.
  • The first robes worn by the U.S. Supreme Court in 1792, called the “robes of justice”, were black with red and white trim on both the front and the sleeves.

Why was Lead Added to Paint?

Julie H. asks: Why did lead use to be added to paint?

The use of lead in everyday objects dates back to the ancient Romans. They used lead in makeup, as an additive in food and wine, in pewter dinnerware, and many other items, including paint. The inexpensive metal was even used in the pipes that transported water throughout the Roman Empire. Lead continued to be used throughout history and into modern day. So it was no surprise that no one thought twice about adding lead to paint. But why did manufacturers add the heavy metal to paint? What purpose did it serve?

Different lead compounds were added to paint as pigments, creating a specific color depending on which compound was used. For example, lead (II) carbonate, known as white lead, makes the paint a white or cream color, while lead tetroxide makes a bright red paint.

The heavy metal additive also decreases the amount of time that the paint takes to dry, makes the paint more durable, and causes the paint to be more moisture resistant. This made lead-based paint ideal for use in homes, on metal exposed to the elements, and even on children’s toys.

Of course, despite the durability of lead-based paints, they still have that little flaw of, well, containing lead. Exposure to the heavy metal can cause numerous health problems. High levels of lead present in the body can lead to convulsions, coma, and death, while lower levels of the metal can have harmful effects on the nervous system, brain, blood cells, and kidneys. Infants and children are the most susceptible to lead poisoning and may suffer from delays in physical and mental development, behavior problems, and lower IQ levels.

Due to all this, the U.S. government banned the use of most leaded paints in 1978. However, paint used in houses may still legally contain lead, so long as the content is less than 90 parts per million. As such, today the major source of exposure to lead paint and lead dust in the United States continues to be homes built before 1978. Lead-based paint was popular before that time, and thus was used in the construction of numerous homes. The EPA does note that lead paint that appears intact is not necessarily harmful. It can only produce dangerous lead dust if the paint has been damaged, such as if it is peeling, being rubbed, or disturbed during renovations.


Bonus Facts:

  • Despite the regulations on the use of lead paint in the United States, Mattel’s recall of toys containing lead paint in 2007 showed that not all countries enforce the same standards. Some Chinese manufacturers of children’s toys admitted to using lead-based paints, claiming that they knew other companies that did the same. They also stated that sometimes the lead-based paint was cheaper to purchase, easier to apply, and had a richer color than other paints.
  • Paint manufacturer Sherwin-Williams Co. may have known about the dangers of lead-based paint as early as 1900. A company memo from 1900 noted that “any paint is poisonous in proportion to the percentage of lead contained in it.” Despite that acknowledgement inside the company, Sherwin-Williams continued to advertise lead as a necessary ingredient in paint.

Does Drinking Gasoline Cause You to Go Blind?

Matthew B. asks: Is it true that drinking gasoline will make you go blind?

Probably not, although anecdotal evidence shows it has a strong correlation with stupidity. What it may do is cause vomiting, vertigo, confusion, drowsiness, breathing difficulties, burning in the esophagus, sore throat, weakness, and diarrhea, even when ingested in small amounts; in larger portions, it can cause loss of consciousness, internal hemorrhaging, convulsions and death (from circulatory failure and/or damage to vital organs).

To address the question at hand a little more specifically: although vision loss is a potential symptom of gasoline poisoning, complete vision loss doesn’t appear to be a common consequence for those who ingest it, unlike many of the aforementioned symptoms. That said, you will experience complete vision loss when you die. So, in that sense, yes, drinking gasoline can cause you to go permanently blind.

And just for reference, it doesn’t take much swallowed gasoline to kill you: just half an ounce can cause severe intoxication in adults and even death in small children, and a 12-ounce drink of gasoline is often fatal.

Despite these dangers, however, a brave few have overcome their (rational) fear of gasoline’s toxicity and, instead, developed a powerful addiction to petrol.

In 2011, it was reported that 71-year-old Chen Dejun, of Chongqing, China, had been drinking gasoline for over 40 years. According to one report, in 1969, when Chen picked up his habit (based on a folk remedy), kerosene was the elixir of choice; adapting to the times, he later switched to gasoline, and by 2011 he was supposedly imbibing nearly one gallon each month (about 4 ounces per day).

A couple of years earlier, it was reported that a 14-year-old boy from the Sichuan province in China was struggling with a 5-year habit. Born from a desire to turn into a Transformer, the boy’s addiction began with drinking the butane out of cigarette lighters, but soon progressed to siphoning gasoline from the family’s motorcycle and sipping on that. Although the boy’s parents claimed his habit was up to nearly 3 quarts a day, experts highly doubted that figure. Regardless of the amount, the addiction had a devastating effect on the child who, before the habit, was “a very smart boy,” but now “he doesn’t [even] know seven plus 17.”

In 2012, one episode of TLC’s My Strange Addiction told the story of 20-year-old Shannon who drank up to two ounces each day. Describing the flavor as “sweet and sour, like a tangy sauce,” Shannon also noted that although it burns the back of her throat, it also “makes me feel good.”

More recently, a 45-year-old UK man, Brian Taylor, was arrested for drinking petrol despite being under an anti-social behavior order (ASBO) prohibiting him from going near gasoline pumps. Apparently, his first gasoline-related run-in with the authorities happened in 2005, when he was found to have stolen fuel to drink 51 times; after these thefts, “bug-eyed and ruddy faced,” he would dance “a little jig while high on the fumes.”

Of course, not everyone who drinks gasoline does it on purpose, although that doesn’t mean they don’t pay a heavy price. In February 2012, 43-year-old Gary Allen Banning of Havelock, North Carolina, accidentally drank a bit of gasoline when he mistook a jar of liquid sitting out on his friend’s counter for a palatable beverage. Wisely, he immediately spit it out before much was ingested. Unwisely, he spilled gasoline on his shirt in the process and never changed it; later that evening, he went outside, stuck a cigarette in his mouth and attempted to light it, at which point he burst into flames. Medics rushed him to the hospital and later to the UNC Burn Center, but due to the severity of the burns, he died early the next day.

Bonus Schadenfreude:

  • For his valiant effort to protect humanity’s collective gene pool by sacrificing his life, Gary Allen Banning was nominated for a Darwin Award in 2012.
  • Methanol, like gasoline, is also highly toxic, as one of Gary’s fellow nominees in 2012 discovered. Employed as a truck washer for a Canadian liquor store chain, the nominee (hereinafter, Vodka Bob) had developed a bad habit of stealing the hooch hidden by truck drivers in the cabs of their vehicles. On the fateful day, he discovered a vodka bottle, although it was filled with a light blue liquid. Somehow forgetting that windshield washer fluid is also blue (and often stored in vehicles), Vodka Bob began swigging the toxic liquid. Seemingly, he never realized that the wiper fluid was not intended for human consumption because he kept ingesting it for the next two days. Ultimately “he died in the hospital from methanol poisoning.”
  • Weirder still, the 2007 Darwin Award winner was Texas Mike, an alcoholic who preferred rectal drinking, i.e., giving himself alcohol enemas. On the eve of his death, Mike poured more than 100 ounces of sherry “right up the old address.” That would be a significant amount to imbibe even over a long period of time; taken all at once, it caused Mike to pass out, and because it was all still resting in his intestines, his body continued to absorb it. Consequently, he died from alcohol poisoning with a blood alcohol content of 0.47% (in many states, you are considered legally drunk at 0.08%). In response to this tragic story, the Darwin Awards’ kindly readers posted many words of sympathy including “takes shit-faced to a whole new level,” “drunk off my ass,” “rectum? Hell no it killed him,” “up the hatch,” and “a drop never touched his lips.” And if you’re wondering, this same type of thing is why vaporized alcohol inhalation can be extremely dangerous. See: The Good and the Bad of Vaporizing and Inhaling Alcohol
  • Some rare individuals can even get drunk simply by eating things like bread and pasta. See: Brewing Beer in Your Digestive Tract

Why Do Radio Signals Travel Farther at Night than in the Day?

Matt T. asks: I was in southern California a while back and managed to pick up an AM radio station from Washington, but only at night. I’ve experienced the same thing at night in Washington picking up an L.A. radio station. In the day, it doesn’t work. So my question to you is why do radio signals seem to travel further at night than in the day?

Not all radio waves travel farther at night than during the day, but some, namely short wave and the medium wave band that AM broadcast signals fall under, definitely can, given the right conditions. The main reason this is the case has to do with the signal interacting with a particular layer of the atmosphere known as the ionosphere, and how this interaction changes from nighttime to daytime.

The ionosphere is a layer of the upper atmosphere about 50 to 600 miles above sea level. It gets its name because it is ionized consistently by solar and cosmic radiation. In very simple terms, X-ray, ultraviolet, and shorter wavelengths of radiation given off by the Sun (and from other cosmic sources) release electrons in this layer of the atmosphere when these particular photons are absorbed by molecules. Because the density of molecules and atoms is quite low in the ionosphere (particularly in the upper layers), it allows free electrons to exist in this way for a short period of time before ultimately recombining. Lower in the atmosphere, where the density of molecules is greater, this recombination happens much faster.

What does this have to do with radio waves?  Without interference, radio waves travel in a straight line from the broadcast source, ultimately hitting the ionosphere.  What happens after is dependent on a variety of factors, notable among them being the frequency of the waves and the density of the free electrons.  For AM waves, given the right conditions, they will essentially bounce back and forth between the ground and the ionosphere, propagating the signal farther and farther. So clearly the ionosphere can potentially play an important part in the terrestrial radio process. But it is the constantly shifting nature of the ionosphere that makes things really interesting. And for that, we’ll have to get a little more technical, though we’ll at the least spare you the math, and we’ll leave out a little of the complexity in an effort to not go full textbook on you.

In any event, the ionosphere’s composition changes most drastically at night, primarily because, of course, the Sun goes missing for a bit. Without as abundant a source of ionizing rays, the D and E layers of the ionosphere cease to be very ionized, but the F region (particularly F2) remains quite ionized. Further, because the atmosphere in the F region is significantly less dense than in the E and D regions, free electrons there survive longer before recombining, resulting in more free electrons (the density of which is key here).

When these electrons encounter a strong AM radio wave, they can oscillate at the frequency of the wave, taking some of the energy from the radio wave in the process.  With enough of them, as can happen in the F layer (that is, when the density of free electrons is sufficient relative to the specific signal frequency), and assuming they don’t just recombine with some ion (which is much more likely in the E and D layers in the daytime), this can very effectively refract the signal back down to Earth at sufficient strength to be picked up on your radio.
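To put rough numbers on that relationship (none of which come from the article itself), a standard rule of thumb in ionospheric radio work is that a layer turns back a vertically incident wave only if the wave’s frequency is below a “critical frequency” of roughly 9 times the square root of the free-electron density (in electrons per cubic metre). The little Python sketch below uses made-up, order-of-magnitude electron densities to show why a ~1 MHz AM signal can be bounced by the night-time F layer while a ~100 MHz FM signal passes straight through; it deliberately ignores daytime D-layer absorption, which is the other half of the story.

    # Minimal illustrative sketch (not from the article): the critical-frequency rule of
    # thumb, f_c ~ 9 * sqrt(N_e), with N_e the free-electron density in electrons per
    # cubic metre and f_c in Hz. The layer densities below are rough placeholders only.
    import math

    def critical_frequency_hz(electron_density_per_m3: float) -> float:
        """Approximate highest frequency a layer can reflect for a vertically incident wave."""
        return 9.0 * math.sqrt(electron_density_per_m3)

    layers = {
        "D layer, daytime": 1e9,     # low electron density; recombination is fast here
        "E layer, daytime": 1e11,
        "F2 layer, night":  5e11,    # stays ionized after sunset
    }

    am = 1.0e6     # ~1000 kHz, a typical AM broadcast frequency
    fm = 100.0e6   # ~100 MHz, a typical FM broadcast frequency

    for name, n_e in layers.items():
        f_c = critical_frequency_hz(n_e)
        print(f"{name}: f_c ~ {f_c / 1e6:.1f} MHz | "
              f"reflects 1 MHz AM: {am < f_c} | reflects 100 MHz FM: {fm < f_c}")

Running it, only the still-ionized night-time F2 layer has a critical frequency comfortably above the AM band, which is exactly the condition the skywave effect described above depends on.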

Depending on conditions, this process can potentially repeat several times with the signal bouncing down to the ground and back up.  Thus, using this skywave, rather than just the normal daytime groundwave, AM radio signals can be propagated even thousands of miles.

Of course, this can become a major problem given that there are only a little over 100 allowed AM radio frequencies (restricted to keep signals from interfering too much with one another), but around 5,000 AM radio stations in the United States alone. Given that at night the signals from these stations can travel vast distances, this is a recipe for stations interfering with one another. As a result, at night, AM stations in the United States typically reduce their power, go off the air completely until sunrise, and/or are required to use directional antennas, so that their signals don’t interfere with other stations on the same frequency. FM stations, on the other hand, don’t have to do any of this, as the ionosphere doesn’t greatly affect their signals; the flip side (benefit or disadvantage, depending on your point of view) is that the range of FM signals, which cannot take advantage of this skywave propagation, is severely limited.

Bonus Fact:

  • AM Radio (Amplitude Modulation) was the first type of radio broadcasting used for mass consumption by the public and is still widely used today. (Although AM radio is becoming less widespread in America, it is still the dominant type of terrestrial radio broadcasting in some countries, like Australia and Japan.) This type of signal works with the receiver translating and amplifying amplitude changes in a wave at a particular frequency into the sounds you hear coming from your speakers.  FM Radio (Frequency Modulation), which started coming into its own in the 1950s, is broadcast in much the same way that AM is, but the receiver processes changes in the frequency of a wave, as opposed to the amplitude. (A toy sketch of the difference follows below.)
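To make that distinction concrete, here is a small Python sketch (my own toy example, using NumPy and deliberately unrealistic low frequencies so the numbers stay readable) that encodes the same 440 Hz tone onto a carrier twice: once by varying the carrier’s amplitude (AM) and once by varying its instantaneous frequency (FM).

    # Toy illustration only: AM varies the carrier's amplitude with the audio signal,
    # FM varies the carrier's instantaneous frequency. Carrier and deviation values are
    # made up for readability, not real broadcast parameters.
    import numpy as np

    fs = 200_000                          # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)        # 10 ms of signal
    carrier_hz = 10_000                   # toy carrier (real AM carriers sit near ~1 MHz)
    audio = np.sin(2 * np.pi * 440 * t)   # the "program" audio: a 440 Hz tone

    # AM: the audio modulates the amplitude of the carrier.
    am = (1 + 0.5 * audio) * np.sin(2 * np.pi * carrier_hz * t)

    # FM: the audio modulates the frequency; phase is the integral of instantaneous frequency.
    deviation_hz = 2_000
    phase = 2 * np.pi * carrier_hz * t + 2 * np.pi * deviation_hz * np.cumsum(audio) / fs
    fm = np.sin(phase)

    print(f"AM envelope swings from {am.min():.2f} to {am.max():.2f}")   # amplitude carries the audio
    print(f"FM envelope stays near  {fm.min():.2f} to {fm.max():.2f}")   # frequency carries the audio

The AM signal’s amplitude swells and shrinks with the tone, while the FM signal keeps a constant amplitude and hides the tone in how fast the wave wiggles, which is why an FM receiver can simply ignore amplitude noise like static.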

Why is Snow White Given That Snowflakes are Clear?

Scott S. asks: When you look at pictures of individual snowflakes, the snowflakes are clear. I was just wondering why is snow white and not clear then?

First, it’s important to understand what’s going on when we see certain colors. Visible light from the Sun or another light source comes in a variety of wavelengths that human eyes interpret as colors.  When light interacts with an object, the wavelengths that the object reflects or absorbs determine what color our eyes perceive. When an object reflects all the wavelengths of light from the Sun that are in the visible spectrum, the object appears white. Something like a fire truck appears red because the paint reflects back certain wavelengths in the red area of the visible spectrum, while absorbing the rest.

This now brings us to water, snowflakes, and snow. Pure water is quite clear, meaning the wavelengths of light more or less pass right through it, rather than being reflected back to your eyeballs. Individual snowflakes are also somewhat clear, but a large accumulation of them ends up looking white, meaning nearly all the light is reflected back rather than passing straight through.  So what gives?

The key here is the way that light interacts with the mass of complexly shaped snowflakes and air known as snow. Much like with water, light bends when it enters a piece of ice, causing ice cubes or icicles to appear murky even when made from clean water. The tiny snowflakes, or ice crystals, that make up a snow bank each bend light somewhat like an ice cube, though not quite as uniformly due to their varied and complex shapes.

So when one of these tiny, beautiful ice crystal formations bends light, that light ultimately encounters another ice crystal in the clump of snowflakes, where it is bent again, and then another and another. The process continues until the light reflects back out of the snow, rather than passing straight through it to the ground. Some wavelengths do get absorbed in the snow, more so when impurities like dirt are introduced, but with fresh snow the majority of the light is ultimately reflected, and thus the snow appears white to you.

All that said, you may have noticed that snow can also look blue under the right circumstances. The white appearance happens when light reflects off ice crystals only a relatively small number of times, without penetrating very far into the snow. When light manages to penetrate deeper, however, the longer wavelengths at the red end of the color spectrum get absorbed a little with each bounce, leaving the shorter wavelengths on the blue side of the spectrum to be reflected back at you.

So what allows the light to penetrate more deeply into certain snow? How compact it is.  Compaction simultaneously fuses more of the snowflakes into larger bundles of ice and gets rid of many of the tiny air pockets in the snow, resulting in whitish-blue looking snow or ice.
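If you want to see the “many bounces, occasional absorption” idea in action, here is a deliberately crude random-walk sketch in Python. It is my own toy model rather than anything from the article: photons step up or down through the snowpack one crystal at a time, with an invented per-bounce absorption chance that is a bit higher for red light than for blue. Shallow snow bounces most photons of every color straight back out (so it looks white), while light that wanders deep loses noticeably more red than blue.

    # Toy 1-D random-walk model of photons scattering in snow (illustrative only; the
    # absorption probabilities and depths are invented, not measured values).
    import random

    def fraction_reflected(absorb_per_scatter: float, depth_limit: int, photons: int = 5_000) -> float:
        """Fraction of photons that scatter their way back out of the snow surface."""
        escaped = 0
        for _ in range(photons):
            depth = 1                              # photon has just entered the snowpack
            while 0 < depth <= depth_limit:
                if random.random() < absorb_per_scatter:
                    break                          # absorbed by an ice crystal or impurity
                depth += random.choice((-1, 1))    # scattered up or down at random
            else:
                if depth <= 0:                     # made it back out the top: reflected light
                    escaped += 1
        return escaped / photons

    random.seed(0)
    for color, absorb in (("blue", 0.002), ("red ", 0.008)):   # made-up per-bounce absorption
        shallow = fraction_reflected(absorb, depth_limit=20)
        deep = fraction_reflected(absorb, depth_limit=300)
        print(f"{color}: reflected by shallow snow {shallow:.0%}, by deep snow {deep:.0%}")

In the shallow case both colors come back out at similar, high rates (white light), while in the deep case the red fraction drops off far more than the blue, which is the blue tint described above.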

Bonus Facts:

  • Contrary to popular belief, if you were to view the Sun from space (assuming your eyes weren’t damaged in the process), you’d see that the Sun looks white in the human visible spectrum, not yellow as it appears from the surface of the Earth.
  • Polar bears appear mostly white for the same reason snow does.  Their fur is not actually white, but made up of hollow, translucent tubes. Light hits the hair and gets scattered around in a similar fashion to snowflakes, eventually getting reflected back out with very little absorption, making the bears appear white.  In fact, polar bear skin is actually quite black.  It used to be thought that the combination of translucent hair tubes and black skin helped keep polar bears warm, but this has since been shown to be incorrect, with the fur doing a pretty amazing job of reflecting the light away.  In fact, a 1988 study performed at St. Lawrence University in New York found that a one-fifth inch strand of polar bear hair only manages to conduct 1/1000th of a percent of ultraviolet light directed through the translucent hair tube.  What actually keeps polar bears warm is a combination of a very thick layer of fat (as much as 4.5 inches thick in some cases) and their dense fur.  Their fur and fat are so effective as insulation that the bears can easily become overheated in open air, even at extremely low temperatures.  Their fat layer also allows them to swim around in frigid waters where their fur does nothing to protect them from the cold, which is also why mother polar bears tend to avoid taking their cubs for swims until the cubs have built up a good layer of fat.  (Incidentally, contrary to popular belief, there is no difference between fur and hair.)
  • In order for a snowstorm to be officially considered a blizzard by the National Weather Service, visibility due to the snow (whether falling snow or blowing ground snow) must be reduced to less than a quarter of a mile, the wind must be blowing at more than 35 miles per hour, and the storm must last three hours or more.
  • Georgetown, Colorado currently holds the world record for the largest snowfall in one day. On December 4, 1913, the city became buried in 5.25 feet of snow (about 1.6 meters).
  • The famous Iditarod Dog Sled Race only allows Northern breeds, such as Siberian Huskies and Alaskan Malamutes, to participate, a rule that came about after a competitor entered the race with poodles who ultimately didn’t handle the conditions very well. This rule protects dogs who were not bred to handle the extreme cold weather experienced during the race.
  • The record for the longest amount of time it took a dog sled team to finish the Iditarod is currently 32.5 days. Winners usually complete the race in eight to ten days. The last dog sled team to finish the Iditarod is given the Red Lantern Award. The award originated with the 1953 Fur Rendezvous dogsled race held in Anchorage, Alaska, and it refers to the lantern that is traditionally lit at the beginning of the race and not extinguished until the last team has crossed the finish line.

What Happens to Undeliverable Mail with No Return Address?

Abi K. asks: What does the postal service do when a package is mailed with no valid recipient’s address and no return address?

According to the USPS, mail can be considered “undeliverable” due to a number of factors ranging from insufficient postage to the person it’s addressed to refusing to accept it. Regardless of the reason the mail cannot be delivered, the USPS states: “All nonmailable pieces are returned to the sender.”

This seems like a pretty cut-and-dried statement that leaves very little room for interpretation. However, there’s always an exception to the rule, and in the case of undeliverable mail that exception is things like periodicals, which are deemed to have little to no value after a certain point due to their time-sensitive nature and so are disposed of accordingly. That said, publishers can request to have such items returned to them if they so wish.

So what about mail that is undeliverable and happens to have no visible or legible return address? The Postal Service has had measures in place to deal with these so-called “dead letters” almost since the service first began in earnest in the 1700s, with the awesomely named position of Inspector of Dead Letters created by an act of Congress way back in 1777. The first Dead Letter Office, on the other hand, wouldn’t exist until 1825, when the sheer amount of dead mail necessitated the creation of a dedicated service to deal with it all.

Today, undeliverable mail without a return address is pre-processed by postal workers and, provided it meets certain criteria which we’ll discuss in a moment, is sent to the Mail Recovery Center in Atlanta. (Up until 1992, the Mail Recovery Center was officially known as the Dead Letter Office, but the USPS opted to change the name to better reflect the ultimate goal of returning mail.)

In any event, as noted on their website, the Mail Recovery Center generally doesn’t accept anything worth less than about $25, or anything that can’t reasonably be traced back to someone, like keys, cosmetics, and food. Such items are instead recycled, disposed of, or, in some cases, donated to charity.

Of course, exceptions can and will be made, but this is handled on a case-by-case basis. For example, if an undeliverable letter or package contains something that clearly has sentimental value (such as photographs or, in one case handled by postal worker Lori Ferguson-Costa, a jar containing a placenta), it will be processed in an attempt to track down the original sender or intended recipient, despite having no real monetary value.

You might at this point be wondering what happens if they find money and, despite their best efforts, can’t track down who sent it or where it was supposed to go. It is simply tallied up and turned over to the US Treasury.

So what goes into the “processing” step? First, a Mail Recovery Clerk has to physically open the letters and packages to discern exactly what they contain, making people in this position just about the only individuals in the US legally allowed to open another person’s mail without it being a federal crime. As you can imagine, this means security is tight and everything the clerks find has to be meticulously recorded and noted down to stop someone just walking out of the building with a pocketful of other people’s stuff.

Beyond just opening the packages and letters, the mail clerks in question will go to fairly extensive lengths to try and return or deliver dead mail. In one case noted in a Smithsonian piece on the day in the life of one of these workers, the worker at the center of the article, Vera, opened a package with no identifying marks whatsoever on the outside, but managed to use a phone number contained inside to ultimately track down the package’s original owner.

In another case, the aforementioned postal worker Lori Ferguson-Costa got a call from a woman who’d recently had her baby die of SIDS. (If you’re curious, see: What Causes Sudden Infant Death Syndrome.)  The woman, who was in tears at the time, explained that she’d mailed a photograph of her baby that she didn’t have any copies of, the last one taken before the baby’s death, but the recipient (the grandmother) never got it. Lori took down a detailed description of what was in the photograph and, after a pretty extensive search around the Mail Recovery Office, managed to find it, despite having only the woman’s description of the picture to go on.

Essentially, any clue whatsoever, even small ones, in the package will be followed up on until the clerk manages to find out where the package was intended to go or who sent it.

As for how long this all takes, the amount of time the Mail Recovery Office will hold things varies and is handled on a case-by-case basis depending on the item and the clerk’s preference. For example, if the item in question is clearly worth a lot of money or seems likely to hold significant sentimental value, like a wedding dress or jewellery, the Mail Recovery Office will hold onto it for upwards of a year (and in some cases even longer, such as a jar containing the cremated remains of one W.C.G. McLeod, who lived from 1891 to 1977, which postal worker Vera stated was there when she got the job and which she intends to keep around until she retires, hoping someone will claim it).

However, since it just isn’t feasible for the Mail Recovery Office to hold onto something indefinitely, in the vast majority of cases, after every avenue to return something has been explored and they’ve held onto an item long enough for someone to miss it and file a claim, they are left with no choice but to sell it in a bulk auction, the profits of which are funneled back to the Postal Service.

So with all their efforts, how successful are they at bringing dead mail back to life? According to official figures, they are able to deliver or return 68% of the dead mail “with obvious value”, a figure which drops to a lower, but not insignificant, 43% when you factor in all dead mail. When you consider that only about 0.05% of mail is deemed “undeliverable” in the first place, that’s a pretty remarkable overall delivery rate.
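Just to put those percentages together (a quick back-of-the-envelope calculation using only the approximate figures quoted above), here is what they imply per million pieces of mail:

    # Rough arithmetic on the quoted figures; the inputs are the article's approximations.
    total = 1_000_000
    undeliverable = total * 0.0005        # ~0.05% of mail is deemed undeliverable
    recovered = undeliverable * 0.43      # ~43% of all dead mail is delivered or returned
    permanently_dead = undeliverable - recovered
    print(f"Per {total:,} pieces: ~{undeliverable:,.0f} undeliverable, "
          f"~{recovered:,.0f} eventually delivered or returned, "
          f"~{permanently_dead:,.0f} never reach anyone.")
    # -> roughly 500 undeliverable, ~215 recovered, ~285 permanently dead per million

In other words, by these figures only a few hundred pieces out of every million mailed end up permanently dead.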


The Origin and Trademarking of “Couch Potato”

Gracen R. asks: Why is someone who is lazy called a couch potato?

If you want to call someone lazy, a time-honoured way to do so would be to call them a “couch potato”. But why is it we compare lazy people to potatoes, and why on Earth did some random guy own the trademark rights to such a silly-sounding expression?

Unlike with most etymologies, we actually know the exact date the phrase in question was first written down for mass public consumption, as well as the exact date it was first spoken out loud. In regards to the former, the first time the phrase appeared in print was in a 1979 LA Times article, which read, “…and the Couch Potatoes who will be lying on couches watching television as they are towed toward the parade route.” In regards to the latter, according to the man who coined the phrase, he first uttered it during a phone call on July 15, 1976.

More specifically, the man who breathed the phrase into existence is Tom Iacino.  He stated he coined the phrase during “a phone call to a friend. His girlfriend answered, and it was just an off-the-top sort of thing when I said, ‘Hey, is the couch potato there?’ She looked over and there he was on the couch, and she started cracking up.”

Although Iacino claims that there was no real thought behind the phrase, other than that he thought it was a pretty funny way to describe his friend, linguists have noted that the phrase is actually a rather clever play on words. To explain, during the 1970s a popular term for the television was “the boob tube,” coined by people who believed watching television was a pursuit only enjoyed by the foolish. Since the edible part of a potato plant is known as a “tuber”, it is commonly believed that the phrase “couch potato” was intended as a clever combination of these two concepts. While that is a convincing and succinct explanation of the origins of the phrase, and one repeated in many an etymological dictionary, we feel it is important to note again that when asked if there was any reason he called his friend a couch potato all those years ago, Iacino offered the verbal equivalent of a shrug in response. Of course, perhaps he simply forgot his reasoning. After all, it’s been nearly four decades since the momentous uttering.

While Iacino was perhaps not initially aware of the playful subtleties behind his utterance, his friend Robert Armstrong certainly was, and after hearing the phrase repeated back to him by Tom, Armstrong asked Iacino for permission to turn it into a cartoon.

At the time this was taking place, both Iacino and Armstrong were members of a group calling themselves the “Boob Tubers”, an organization created in 1973 in humorous response to a growing health-craze movement in California. In contrast to this movement, the main goal of the Boob Tubers was basically to sit in front of a television and eat junk food. While their goal was certainly a noble one, it didn’t really gain any traction until the phrase “couch potato” came along.

Armstrong, a cartoonist by trade, is the one credited with really pushing the phrase into the mainstream lexicon when, in 1979, under the new name of “The Couch Potatoes”, he, Tom, and several others entered a float in the Doo Dah Parade (an event meant to parody the more famous Tournament of Roses parade in Pasadena). The float, which literally consisted of nothing more than a few couches pointed at a couple of working TVs, on which the group sat for the duration of the parade, was a huge hit, and its subsequent media coverage resulted in the first known published use of the term, in the aforementioned 1979 LA Times article.

Armstrong, inspired by the success, created a bunch of merchandise around the concept of a couch potato, even going so far as to publish a newsletter aptly called “The Tuber’s Voice: The Couch Potato’s Newsletter”.

Though Armstrong had the foresight to trademark the phrase in 1979, it simply became too popular. Despite his best efforts, the sheer ubiquity of the phrase in media and print meant that he could not realistically claim exclusivity to it. He found this out when his legal team wrote to the New York Times, asserting that “Couch Potato is a registered trademark and not a generic term…” and attempting to get the Times to stop using it as such. The Times’ legal department responded,

I am afraid that your letter overlooks the fundamental distinction between statements of fact and statements of opinion.  Mr. Safire’s view concerning the wide acceptance of the term “Couch Potato” is clearly an expression of editorial opinion, and, as the Supreme Court has instructed, there is no such thing as a false opinion.  Although you may disagree with this opinion, there is simply no basis for requesting a correction.

Needless to say, neither the New York Times nor other media outlets were moved to stop using the term as generic slang.

Bonus Facts:

  • Initially The Couch Potatoes were an exclusively male group; this changed when several members reported that this was annoying their wives. In response, they created another group called “The Couch Tomatoes” for women.
  • In 2005, a group of potato farmers protested for the removal of the phrase “couch potato” from the dictionary citing that its inherently negative connotations were harmful to the image of the potato. Seriously.

The Rubber Band: Holding It Together Since 1820

Matthew L. asks: Who invented the rubber band?

Cheap, reliable, and strong, the rubber band is one of the world’s most ubiquitous products. It holds papers together, prevents long hair from falling into one’s face, acts as a reminder around a wrist, is a playful weapon in a pinch, and provides a way to easily castrate baby male livestock… While rubber itself has been around for centuries, rubber bands were only officially patented less than two centuries ago. Here now is a brief history of the humble, yet incredibly useful, rubber band.

It has only recently been discovered that Mesoamerican peoples (including the Aztecs, Olmecs, and Mayans) were making rubber (though they didn’t call it that) three thousand years ago. By mixing the milky-white sap known as latex from the indigenous Hevea brasiliensis trees (later called Para rubber trees) with juice from morning glory vines, they could create a solid that was, surprisingly, quite sturdy. These civilizations used this ancient rubber for a variety of purposes, from sandals to balls to jewelry. In fact, while Charles Goodyear is generally credited with the invention of vulcanized rubber (a more durable, non-sticky rubber compound made via the addition of sulfur and heat), it seems the Aztecs were achieving something similar thousands of years earlier simply by varying the proportions of latex and morning glory juice to create rubbers of different strengths.

When Spanish explorers arrived in South America in the 16th century, they discovered for themselves the many uses of this elastic, malleable sap. When the French explorer Charles de la Condamine “discovered” it in the 1740s, he called it “caoutchouc”, a French rendering of a South American word for the latex. In attempting to figure out what it was exactly, Condamine came to a wrong conclusion – he thought it was condensed resinous oil. The name “rubber” was only attributed to this latex material when, in 1770, the famed British chemist Joseph Priestley (who also discovered oxygen) noted that the material rubbed pencil marks right off paper, thereby inventing the eraser and giving the “rubbing material” its name. By the end of the 18th century, the material was forever known as “rubber.”

In 1819, Englishman Thomas Hancock was in the stagecoach business with his brothers when he attempted to figure out better ways to keep his customers dry while traveling. He turned to rubber to develop elastic and waterproof suspenders, gloves, shoes, and socks. He was so enamored with the material that he began to mass produce it, but he soon realized he was generating massive amounts of wasted rubber in the process. So Hancock developed his “pickling machine” (later called a masticator) to rip the leftover rubber into shreds. He then mashed the malleable rubber together, creating a new solid mass, and put it into molds to shape whatever he wanted. One of his first designs was bands made out of rubber, though he never marketed or sold them, not realizing the practicality of rubber bands. Plus, vulcanization hadn’t been discovered yet (which we will discuss in a moment), so the bands would soften considerably on hot days and harden on cold days. In short, these rubber bands simply weren’t very practical at this stage of the game, in terms of many of the things rubber bands would later be used for. Hancock didn’t patent his machine or the shreds of rubber it produced, instead hoping to keep the manufacturing process completely secret. This would end up being a rather large mistake.

By 1821, Hancock had perfected his machine, though he would keep it secret for about ten years, in an attempt to dominate the market. In fact, that is why he called it a “pickling machine,” to throw everyone off the scent. It worked. Hancock turned rubber into a commercially practical item and he dominated the market for the next twenty years.

In 1833, while in jail for failure to pay debts, Charles Goodyear began experimenting with India rubber. Within a few years, and after he got out of jail, Goodyear discovered his vulcanization process. Teaming with chemist Nathaniel Hayward, who had been experimenting with mixing rubber with sulfur, Goodyear developed a process of combining rubber with a certain amount of sulfur and heating it to a certain point; the resulting material became hard, elastic, non-sticky, and relatively strong. A few years later, in 1844, he had perfected the process and was taking out American patents on this method of vulcanizing rubber. He then traveled to England to patent his process overseas, but ran into a fairly large problem – Thomas Hancock had already patented a nearly identical process in 1843.

There seem to be conflicting reports on whether Hancock developed the vulcanization process independently of Goodyear or whether, as many claim, he had acquired a sample of Goodyear’s vulcanized rubber and developed a slight variation on the process. Either way, Hancock’s patent stopped Goodyear from being able to patent his process in England. The ensuing patent battle dragged on for about a decade, with Goodyear eventually coming to England and watching in person as a judge proclaimed that, even if Hancock had acquired a sample prior to developing his own process for this type of rubber, as seems to have been the case, there was no way he could have figured out how to reproduce it simply by examining it. However, famed English inventor Alexander Parkes claimed that Hancock had once told him that running a series of experiments on the samples from Goodyear had allowed him to deduce Goodyear’s, at the time, unpatented vulcanization process.

But in the end, in the 1850s the courts sided with Hancock and granted him the patent, rather than Goodyear, quite literally costing Goodyear a fortune; had they decided otherwise, Goodyear would have been entitled to significant royalties from Thomas Hancock and fellow rubber pioneer Stephen Moulton.

Though he had a right to be bitter over the ruling, Goodyear chose to look at it thusly, “In reflecting upon the past, as relates to these branches of industry, the writer is not disposed to repine, and say that he has planted, and others have gathered the fruits. The advantages of a career in life should not be estimated exclusively by the standard of dollars and cents, as is too often done. Man has just cause for regret when he sows and no one reaps.”

Goodyear, though eventually receiving the credit he deserved, died in 1860 shortly after collapsing upon learning of his daughter’s death, leaving his family approximately two hundred thousand dollars in debt (about $5 million today).

The patent dispute with Goodyear also had a profound, and ultimately negative, effect on Hancock. While he was entangled in the time-consuming mess for years, others began to reap the benefits of Hancock never patenting his masticator process or the seemingly useless bands it created. Specifically, in 1845, Stephen Perry, working for Messrs Perry and Co, Rubber Manufacturers of London, filed a patent for “Improvements in Springs to be applied to Girths, Belts, and Bandages, and Improvements in the Manufacture of Elastic Bands.” He had discovered a use for those rubber bands – holding papers together. In the patent itself, Perry distanced himself and his invention from the ongoing vulcanized rubber dispute by saying,

“We make no claim to the preparation of the india rubber herein mentioned, our invention consisting of springs of such preparation of india rubber applied to the articles herein mentioned, and also of the peculiar forms of elastic bands made from such manufacture of india rubber.”

While the rubber band was invented and patented in the 19th century, at that point it was mostly used in factories and warehouses, rather than in the common household. This changed thanks to William Spencer of Alliance, Ohio. The story goes, according to the Cincinnati Examiner, that in 1923 Spencer noticed the pages of the Akron Beacon Journal, his local newspaper, were constantly being blown across his and his neighbors’ lawns. So he came up with a solution. As an employee of the Pennsylvania Railroad, he knew where to acquire spare rubber pieces and discarded inner tubes – the Goodyear Rubber Company, also located in Akron. He cut these pieces into circular strips and began to wrap the newspapers with these bands. They worked so well that the Akron Beacon Journal bought Spencer’s rubber bands to do the deed themselves. He then proceeded to sell his rubber bands to office supply, paper goods, and twine stores across the region, all the while continuing to work at the Pennsylvania Railroad (for more than a decade more) while he built his business up.

Spencer also opened the first rubber band factory in Alliance and then, in 1944, a second one in Hot Springs, Arkansas. In 1957, he designed and patented the Alliance rubber band, which ultimately set the world rubber band standard. Today, Alliance Rubber is the number one rubber band manufacturer in the world, churning out more than 14 million pounds of rubber bands per year.

So, next time you are shooting a friend with this little elastic device, you can thank the Mayans, Charles de la Condamine, Thomas Hancock, Charles Goodyear, and William Spencer for the simple, yet amazingly useful rubber band.


What are Smelling Salts?

David A. asks: What exactly are smelling salts? Do they really work to wake up unconscious people?

Smelling salts have been used for everything from reviving those who have fainted to giving athletes a chemically-induced “wake up.” But what are smelling salts? Are they actually an effective medical treatment? How do they work? Are they toxic and dangerous?

Smelling salts aren’t what most people think of as “salt”; there isn’t any sodium in them. The main and most active ingredient is ammonium carbonate ((NH4)2CO3·H2O), a solid chemical compound that readily releases ammonia gas as it breaks down. Ammonium carbonate also goes by “baker’s ammonia,” owing to its use as a leavening agent before baking soda and baking powder became popular in the early to mid-19th century. In fact, baker’s ammonia is still used in a few traditional European Christmas-time recipes like Speculoos (a spiced shortbread biscuit) and Lebkuchen (similar to a gingerbread cookie). The other main ingredient in smelling salts is often some component that masks the terrible smell of ammonia gas, usually perfume or even flowers.
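For reference, the reaction doing the work here (a standard decomposition reaction, not something spelled out in the article) is the breakdown of ammonium carbonate into ammonia gas, carbon dioxide, and water:

(NH4)2CO3 → 2 NH3 + CO2 + H2O

It is the escaping NH3 that gives smelling salts their kick.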

There are cheaper imitation forms of “smelling salts,” which consist of diluted ammonia dissolved in water, along with ethanol and perfume. As the British Journal of Sports Medicine points out, these types of mixtures are not actually smelling salts and would be more accurately termed  “aromatic spirits of ammonia.”

Smelling salts work because the human body aggressively reacts to ammonia gas in several ways. When sniffed, the gas irritates the membranes of the nostrils and lungs, so much so that it triggers a sharp inhalation reflex, bringing in more air and thus more oxygen, which can result in improved alertness. A person who faints often loses consciousness due to decreased blood flow to the brain; sniffing smelling salts can raise a person’s blood pressure, heart rate, and oxygen levels, aiding brain activity and reactivating the sympathetic nervous system.

While medical research does confirm that sniffing smelling salts by someone who has had a fainting spell can in some cases be beneficial, the question of the toxicity of ammonia gas remains. Exposure to large amounts of ammonia gas can cause lung damage, blindness, and even death. It is also highly explosive and corrosive. The Occupational Safety and Health Administration (OSHA) even set a 15 minute safe exposure level for highly concentrated ammonia gas.

That being said, the amount of ammonia gas that is being breathed in with a snort of smelling salts is minimal and only causes its intended effect – irritation of the nose and lungs. There has never been a known case of someone dying of ammonia gas poisoning due to using smelling salts.

If you’re curious who came up with this method of reviving unconscious individuals, the Romans were the first to use smelling salts to “awaken” the senses. The writings of the Roman philosopher and author Pliny the Elder mention Hammoniacus sal, most prominently in his encyclopedia Natural History. There is debate over whether it is related to the 13th century term sal ammoniac, a rare mineral (ammonium chloride) that was well-known to alchemists. Written about by Albertus Magnus and other alchemists, sal ammoniac was experimented with and distilled in their attempts to discover the philosopher’s stone. (Similar experiments searching for the philosopher’s stone, this time playing with massive amounts of human urine, gave us the first element discovered since ancient times.)

More practically, sal ammoniac was used in the Middle Ages to change the color of vegetable dyes. In the 17th century, it was discovered that a solution of ammonia could be distilled from the shavings of deer hooves and antlers. When crystallized, the result was found to also contain carbon, making it ammonium carbonate – called “salt of hartshorn” when made in this way.

Britain’s Victorian era brought the use of smelling salts as a medical treatment into fashion. Police officers and emergency workers carried smelling salts around as “lady revivers” – presumably because women fainted more often than men, generally due to wearing clothing that restricted breathing. As expressed in an 1878 manual entitled The Treatment to Restore Natural Breathing and Circulation, written by Surgeon Major Dr. Peter Sheppard of Britain’s Army Medical Department:

“Rule 4 – To Excite Inspiration – During the employment of the above method excite the nostrils with snuff or smelling salts, tickle the throat with a feather. Rub the chest and face briskly, and dash cold and hot water alternately on them.”

Beginning in the early 20th century, boxers began using smelling salts during matches to keep themselves alert after a particularly hard blow to the head. In theory, it “revived” the fighter enough that he would stay conscious long enough to finish the match. More recently, smelling salts have been banned in competitive boxing, first in Britain (in the late 1950s) and then in America (in the 1960s). This isn’t due to the inherent dangers of ammonia gas, but rather because reviving a fighter this way potentially hides a more serious injury. The reasonable assumption goes that if someone needs to be revived with smelling salts, there is a much larger medical issue at stake (head trauma, concussion, neck injury) and they should not be going back into the ring.

While smelling salts have been banned in boxing for years, they are still legal in other sports. In fact, there has been in a rise in use of them, especially in football and hockey, where jarring hits can make one woozy. Peyton Manning, Michael Strahan, Landon Donovan, Alexander Ovechkin, Samuel Eto’o, Brett Favre, and Tom Brady are just a few of the more prominent players who have either admitted to or have been photographed sniffing smelling salts on the sidelines.

In 2005, the Florida Times-Union (out of Jacksonville, Florida) published a story about the rise of smelling salts usage, with many players admitting that they use them before every game, and even multiple times throughout a game; as the piece put it, “Players speculated that (smelling salts) raised their adrenaline levels by a factor of 10.” Of course, there is debate on whether or not this is really just all in their heads. Outside of the placebo effect, they could very likely get the same physiological results from simply taking several deep breaths, without needing to regularly inhale a highly toxic gas.

Beyond whether it’s actually doing anything helpful or not, one company that produces smelling salt tablets (usually how smelling salts are sold today) released a statement at the time that said “ammonia inhalants are used for inhalation only to prevent or treat fainting.” A spokesperson for the company elaborated on the statement for the article, saying “that if the players are utilizing the cartridges to clear their nasal passages or as a performance enhancer, then they are not using them in the designed manner. They were constructed to be used very sparingly, and only to wake up an unconscious person.”
