
Chains of Consequences: How 17th Century Candles Ruined American Television

By Tom Anderson

A picture of a candle taken by Petar Milošević and shared under the CC BY-SA 4.0 licence

When a British traveller visits the United States for the first time, in my experience there is an almost comically similar progression of culture shocks. Beyond the obvious, like cars driving on the wrong side (and historically being larger, although this is less dramatic nowadays), this consists of 1) outrage about sales tax being added to the stated price in shops, 2) bemused surprise at portion sizes in restaurants and, historically, free refills, and 3) outrage again over how tipping is seen not as a small voluntary gift on top of a waiter’s wages, but as a large, compulsory additional charge without which, in many cases, waiters would not be paid. Fourth on the list comes when our traveller returns to his or her hotel room, switches on the television and is confronted with the numbing omnipresence of TV advertising in the US. This likely happens with travellers from many countries, but is all the more dramatic for British tourists, who are used to watching the BBC without any advertising at all. Why, they opine on the edge of sanity after a few hours of this, are there adverts (or ‘commercials’, in local terminology) every seven and a half minutes? Why do breaks come at ridiculous times, like right after the opening credits or right before the closing credits? Why are they allowed to advertise drugs, and what did that small print say? Why has America not had another revolution to get rid of this?

We can, at least, answer the less histrionic of these questions. (And, of course, we should note that American companies have since invented services like Netflix or Amazon Prime, where the appeal is not necessarily that a small monthly charge gives one TV on demand, but that it gives one TV without interruptions.) The answer to this question involves the limitations of a technology that ceased to be mainstream long before television was invented: the humble candle.

Title page to the lectures when printed as a book in 1861

For us in the electrically-lit 21st century, it is easy to see candles as a mere novelty. Indeed, even in the 19th century many people were relying on superior lighting methods such as oil lamps and gaslights. But for a large part of human history, candles were the cheapest and most widespread means of ensuring that life did not have to be limited to the hours of the sun. In 1848, the great scientist Michael Faraday delivered one of the Royal Institution Christmas Lectures, a series for an audience of children from the ordinary public which he himself had founded in 1825, and which continues to this day (and is now televised). For his subject, Faraday picked an everyday technology that every child would be familiar with, exploring the scientific principles behind it. Today, he might pick a smartphone; then, he chose a candle. The Chemical History of a Candle is still, to this day, considered one of the exemplars of how to do scientific outreach to a young audience without dumbing down or being patronising. Some of the ideas Faraday used still survive in primary and secondary school curricula for teaching pupils about the principles of energy. Ironically enough, Faraday’s own work on electromagnetism would be the beginnings of the principles that later scientists and inventors would use to produce electric lighting—reducing the candle to the novelty we see it as today.

Historically, most candles were made from tallow, which remained the cheapest option even after better waxes were discovered. While mediaeval churches used beeswax candles, these were too expensive for the common folk, who continued to use tallow candles. Tallow is rendered-down beef or mutton fat, which is solid rather than liquid at room temperature. A wick, a length of flammable thread, is repeatedly dipped in tallow to build up the width of the candle and give it the hydrocarbon fuel it will need to burn for hours. This technique is believed to have been invented by the Romans around 500 years before Christ. The Han Dynasty of China also developed candles a few centuries later, relying on whale fat rather than tallow as their fuel. It would not be until the eighteenth century that better sources for waxes would be found (such as rapeseed oil).

When a candle’s wick is lit, the heat melts and vaporises the wax or tallow fuel near the top, which then ignites to produce a constant flame. This represents an exothermic combustion reaction between the hydrocarbon fuel and the oxygen in the atmosphere, producing carbon dioxide and water vapour as waste; incomplete combustion can also lead to carbon monoxide. Much of the light of a candle comes from particles of un-combusted carbon glowing in the flame; if one puts the blade of a knife in a candle flame, it will be blackened by this carbon soot. Candles therefore tend to stain their surroundings with soot if used in large numbers and for long periods. The principle behind a candle is therefore the same as that of a burning gas jet in a gas cooker, for example; it is merely that the fuel is initially solid (so safer to handle) and is only converted to gas in small quantities at a time as it melts. For safety, a gas jet is set up for complete combustion (i.e. not producing dangerous quantities of carbon monoxide, only carbon dioxide), which means the flame of a gas cooker is mostly faint blue and almost invisible—it lacks the glowing carbon particles. It is therefore no use as a lighting source, only as a source of heat. We should note that the principles of combustion and oxygen were only discovered in the eighteenth century (for more on which see my alternate history book Diverge and Conquer), so for much of history candles were used without a full understanding of how they functioned.
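The overall reaction can be sketched with a simplified equation. Tallow is really a mixture of triglycerides, so as a rough stand-in we can take a typical paraffin wax molecule, C₂₅H₅₂ (an assumption for illustration, not the exact chemistry of tallow):

C₂₅H₅₂ + 38 O₂ → 25 CO₂ + 26 H₂O (complete combustion, releasing heat and light)

With too little oxygen, combustion is incomplete and some of the carbon instead leaves as carbon monoxide, or as the glowing soot particles mentioned above:

2 C₂₅H₅₂ + 51 O₂ → 50 CO + 52 H₂O (incomplete combustion)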

Picture of Chandeliers in a large billiard hall taken by Lez Franiak and shared under the CC BY-SA 4.0 licence

Though a convenient, cheap and relatively safe means of lighting, candles come with a number of disadvantages. They are not very bright, meaning they need to be used collectively to light a room effectively—leading to the invention of the large ceiling candle holder, or ‘chandelier’ (from chandelle, the French word for a tallow candle; interestingly, French has an entirely different word, bougie, for the historically more expensive wax candle). Though candles may take a long time to burn down, they drip hot tallow or wax, which is not only wasted fuel but can also fall on people below, causing pain and stains. Their wicks must be trimmed regularly to reduce the amount of soot produced (which comes partly from the wick itself rather than the fuel). Making candles (being a ‘chandler’, the origin of that name) became a protected craft in the Middle Ages, with its own guilds. Many guilds survive today in the City of London’s Livery Companies, and they include the Worshipful Company of Tallow Chandlers (ranked 21st in precedence, founded 1462) and the Worshipful Company of Wax Chandlers (ranked 20th, granted a Royal Charter in 1484 but already in existence before then). One can see from this how wax candlemaking was considered the more prestigious business, reflecting the greater expense of its products and the social elevation of its clientele. Today, appropriately enough, the modern descendants of the two rival companies celebrate Candlemas together on February 2nd—the date on which the presentation of the infant Jesus Christ in the Temple in Jerusalem (Luke 2:22-40) is commemorated. Candles are identified with Jesus as the Light of the World, which in the modern UK is seen at Christmas with the Christingle, an orange with a lit candle in it (a surprisingly recent imported tradition, arriving in the UK from Germany in 1968).

The aforementioned issues with using candles for lighting became a particular problem for theatrical performances when these boomed as a form of popular entertainment in the sixteenth and seventeenth centuries. Candles were more often used to light small private indoor theatres for the wealthy, and plays were often written for such patrons first before being performed to the general public. In order to minimise the problems with candles, a play therefore needed short breaks in which to trim the wicks, scrape away dripping wax, and relight or replace any snuffed-out candles. The principle of a five-act structure for a play already existed, inherited from the Ancient Greeks, but this lighting necessity ensured that five acts would remain the norm, with candle maintenance taking place in between each act. There were no formal intervals or intermissions in the modern sense in Shakespeare’s time; his plays and those of his contemporaries were typically performed all the way through, with only these brief pauses between acts. This led to some inventive writing for how to incorporate the breaks. For example, the playwrights Thomas Middleton (1580-1627) and William Rowley (1585-1626) wrote the play The Changeling (1622), in which the character Deflores hides a plot-important rapier in the gap (called the act-time) between Acts II and III!

Other methods sprang up to cover this break (which could also be used to shift sets and for costume changes, as the audience came to expect the regularly spaced gaps). Incidental musical performances or short additional play pieces (as in the French entr’acte and the Spanish entremés) could also fill the gap. Sometimes these were plot-relevant, such as covering what other characters were up to at the time, but other times they were simply unrelated time-filling activities. Formal intervals or intermissions in the sense we know today, as rest breaks for the audience, may have started in France, and became more popular in the eighteenth century. Most French plays of this era have acts of a very consistent length as a result; the French writer and drama theorist Jean-François Marmontel argued that an intermission should not be interpreted as a breaking of the play’s action or a suspension of disbelief, but that a play should be regarded as continuing in real time while the audience rested. This conceit was not universal, and today plays are often portrayed as freezing at a plot-important moment before an intermission and then resuming afterwards. Because intervals only came about after the era of Shakespeare, modern interpreters have to arbitrarily decide where to insert them in his plays (and those of his contemporaries).

Of course, when cinema arrived at the turn of the twentieth century, there was a new reason for intervals or intermissions—the time needed to change film reels. Just as in the theatre for over a century before, the interval was also an opportunity for the owners to sell overpriced refreshments to the punters. In the United States, an iconic animation called “Let’s All Go to the Lobby” (showing anthropomorphised foods and drinks) was made in the 1950s to encourage punters to do so mid-film; decades later, this inspired the Adult Swim cartoon Aqua Teen Hunger Force. Nowadays, there is no technical reason for a film to have breaks, and even though films have grown longer on average, there is usually no provision for a comfort break. Indian cinema is an exception, having maintained the norms of 1950s cinema (in part because selling interval refreshments helps keep cinemas afloat), and Indian films typically have intervals even when shown in the West. There are those of us who wish western cinema did similarly, rather than forcing viewers to choose between risking a bursting bladder in a two-and-a-half-hour film in dire need of an editor’s knife, or missing a crucial plot point with a comfort break.

An advert for a candle.

The five-act structure of plays was applied in turn to radio dramas, which in the modern United States are usually treated as a vanished art form, but are still popular and mainstream in the UK and many other countries. Of course, the breaks in between the acts could no longer be used to physically sell popcorn and drinks to the punters—or could they? Indirectly, it was still possible to sell products: the breaks could now be sold to advertisers, who would pay to run their adverts there! Indeed, early American radio was noted for its blatant advertising, even outside the breaks—the term ‘soap opera’ comes from slice-of-life series paid for by soap powder companies and full of less-than-subtle product placement. The length of each act was maintained from the tradition that had grown up over centuries of drama, dictated by the limitations of candles, a form of lighting long rendered obsolete by this point. And, in turn, the five acts and the breaks in between were carried over to television as well.

However, American television schedules were built on one-hour chunks. As a result, the old model was halved, meaning that commercial breaks came far too often—but then, the alternative was to lose precious advertising revenue. American TV dramas are frequently described as ‘one-hour’ format, when the reality is anything but—the programme on a DVD or a streaming service is just 42 minutes, using barely more than two-thirds of that time. The rest is advertising. ‘Half-hour’ American shows, usually sitcoms, are just 21-23 minutes long. The difference becomes all the more stark when one considers the ability to marathon or binge episodes of a programme via box set or online. Watching six episodes back-to-back of an ‘hour-long’ American drama takes barely more than four hours, assuming one does not pause to insert one’s own comfort breaks! This can cause problems when US TV networks attempt to run programmes bought from overseas, in particular from channels like Britain’s BBC that have no advertising breaks at all. The drama Hustle, about a group of con artists, for example, was an hour long—and that really meant sixty minutes. As a result, while it was successfully sold to the US, it was only shown by the cable service AMC, which had less advertising than the US norm (and even then the episodes were cut down to 52 minutes). When Doctor Who returned to British screens in 2005, it was pointedly now based on 45-minute episodes to make it easier to sell to the American market.
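As a back-of-the-envelope check, the arithmetic of the ‘one-hour’ format can be sketched in a few lines (the 42-minute figure is the typical runtime quoted above, not a formal broadcasting standard):

```python
# Rough arithmetic of US 'one-hour' TV slots; actual runtimes vary
# by network and era, so treat these as illustrative figures only.
slot = 60          # minutes in the nominal broadcast slot
content = 42       # minutes of actual programme on DVD or streaming

ads = slot - content
print(f"Advertising: {ads} minutes, i.e. {ads / slot:.0%} of the slot")
# -> Advertising: 18 minutes, i.e. 30% of the slot

# Six 'hour-long' episodes watched back-to-back without adverts:
binge = 6 * content / 60
print(f"Six episodes back-to-back: {binge:.1f} hours")
# -> Six episodes back-to-back: 4.2 hours
```

Which is indeed "barely more than two-thirds" of the slot, and "barely more than four hours" for a six-episode binge.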

So, if you are like my friend swearing repeatedly at the hotel TV at having to sit through three advert breaks before Jay Leno gets to interview Lisa Kudrow, you know who to blame – seventeenth-century candlemakers.

Thanks to Alex Richards and Angelo Barthelemy for suggesting this topic.
