Defeating the Nazis: The Twisting, Turning Road That Gave Us the Microwave Oven
Ahhh the microwave oven: perhaps the most con of mod cons. Whether you’re zapping a mug of lukewarm coffee back to life, rewarming some leftovers, or heating up a frozen burrito for the fourth time this week, this handy little device will get the job done quickly and efficiently without the muss and fuss of a regular stove or oven. But have you ever wondered how this seemingly magical box manages to cook your food without flames or a glowing element – or who invented it in the first place and why? Well, the answer may surprise you, for the key working component in this appliance found in every kitchen, break room, and hotel suite was once among the most closely guarded military secrets in the world, a key technological breakthrough that was instrumental in winning the Second World War and shaping the modern world as we know it. This is the long and surprisingly fascinating story of the microwave oven.
The origins of the microwave oven stretch all the way back to the discovery of radio waves. In 1865, Scottish physicist James Clerk Maxwell published the groundbreaking paper A Dynamical Theory of the Electromagnetic Field, in which he postulated that visible light consists of interacting electric and magnetic fields and laid out the physical laws – today known as Maxwell’s Equations – governing the behaviour of these fields. Maxwell’s theories predicted the existence of an entire spectrum of electromagnetic waves, varying in frequency and wavelength. In 1888, German physicist Heinrich Hertz – after whom the unit of frequency is now named – succeeded in generating and detecting electromagnetic waves of far longer wavelength than visible light, confirming Maxwell’s predictions. Hertz’s discoveries not only inspired later inventors like Guglielmo Marconi to develop wireless communications – aka radio – but also laid the groundwork for the discovery of the rest of the electromagnetic spectrum. While the spectrum is continuous, for the sake of convenience it is usually divided into seven main regions or bands according to wavelength: radio waves, down to about 1 metre; microwaves, from 1 metre to 1 millimetre; infrared light, from 1 millimetre to 700 nanometres; visible light, from 700 to 400 nanometres; ultraviolet light, from 400 to 10 nanometres; X-rays, from 10 nanometres to 10 picometres; and finally gamma rays, 10 picometres and below.
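Since frequency and wavelength are just two views of the same wave, related by c = fλ, the bands above can be stated either way. A rough sketch of the conversion (the example figures are round numbers drawn from elsewhere in this story, not official band edges):

```python
# Hedged sketch: converting wavelength to frequency via c = f * lambda.
C = 299_792_458  # speed of light in metres per second

def frequency_hz(wavelength_m):
    """Frequency of an electromagnetic wave with the given wavelength."""
    return C / wavelength_m

# Chain Home's 10-metre waves sit at about 30 MHz, near the top of the
# shortwave band:
print(frequency_hz(10) / 1e6)     # ≈ 30 (MHz)
# A modern microwave oven's 12.2 cm waves correspond to about 2.46 GHz:
print(frequency_hz(0.122) / 1e9)  # ≈ 2.46 (GHz)
```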
In the 1910s and 20s, it was discovered that high-frequency electric currents and shortwave radio signals could induce heating in a variety of materials – including human tissue. This led to the development of diathermy, a method for electromagnetically heating muscles and other deep tissue still widely used in physiotherapy to this day. The application of this technology to the field of cooking was readily apparent, and at the 1933 Century of Progress Exposition in Chicago, electrical giant Westinghouse demonstrated the rapid cooking of foods like steak and potatoes by placing them between two metal plates attached to a 10 kilowatt, 60 megahertz shortwave radio transmitter. But while impressive, the required equipment was bulky, expensive, and dangerous, and the technique failed to take off. The dream of cooking food rapidly without flames would have to await the development of a revolutionary piece of technology, whose invention was driven by the needs not of medicine or gastronomy, but of war.
Our story properly begins in 1934, as Europe was once again gearing up for war. The inter-war period saw huge leaps in aviation technology, with aircraft going from rickety contraptions of wood, wire, and fabric to sleek, powerful, all-metal machines capable of flying at speeds of hundreds of kilometres per hour and altitudes of ten thousand metres. So advanced were these new weapons that military planners boasted that “the bomber will always get through” and that future wars would be ended swiftly and decisively by destroying the enemy’s military and industrial assets from the air. In the United Kingdom, these developments caused no small amount of anxiety, as the nation had already gotten a taste of the horrors of aerial bombardment during the First World War. Unfortunately, the Royal Air Force’s infrastructure for detecting and tracking incoming enemy bombers had barely evolved over the previous two decades, depending largely on the human aircraft spotters of the Royal Observer Corps.
Sound locating devices called acoustic mirrors were also installed at several RAF bases, but these were of dubious utility. While this equipment could give an incoming aircraft’s bearing, it could not measure its range, while the faint sound of distant aircraft was easily drowned out by other noises. Furthermore, as sound travels at slightly over 1,200 kilometres per hour and even bombers of the era could reach over 400 kilometres per hour, an aircraft’s position would have changed significantly by the time it was detected.
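The arithmetic behind this lag is simple enough to sketch. The speeds are the article’s round figures, and the detection range is a hypothetical example:

```python
# Back-of-the-envelope sketch of why acoustic detection lagged so badly.
SOUND_KMH = 1200    # approximate speed of sound near sea level
BOMBER_KMH = 400    # speed of a fast late-1930s bomber

def position_error_km(detection_range_km):
    """How far the aircraft travels while its sound crosses the gap."""
    sound_delay_h = detection_range_km / SOUND_KMH
    return BOMBER_KMH * sound_delay_h

# A bomber first heard at 30 km has already moved about 10 km by the
# time the sound arrives -- a quarter of the original distance:
print(round(position_error_km(30), 1))  # 10.0
```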
This forced the Royal Air Force to adopt the strategy of Standing Patrols, whereby fighters continuously circled likely approach routes to major targets, being relieved by other aircraft when their fuel ran out. As can be imagined, this was a very inefficient strategy, extremely costly in terms of aircraft, engine hours, and fuel, so in the late 1930s the RAF introduced specialized interceptor aircraft like the Hawker Fury and Hawker Hurricane which could remain on the ground and, upon receiving an attack warning, quickly take off and climb to operating altitude. However, there could never be enough of these aircraft to cover the entirety of the UK; some means of detecting incoming aircraft was still needed in order to vector the interceptors onto their targets.
Thankfully, a solution soon appeared in the form of Radio Detection or RD – what today is widely known as Radar. While the invention of radar is typically credited to the British, like all big, important ideas it was actually developed independently in several different countries at around the same time. During his 1888 experiments with radio waves, Heinrich Hertz discovered that these waves could be reflected off metallic objects. Similar effects were later observed by radio pioneers such as Marconi and British engineer Charles Samuel Franklin, who in a 1922 paper delivered before the Institution of Electrical Engineers wrote:
“I also described tests carried out in transmitting a beam of reflected waves across country … and pointed out the possibility of the utility of such a system if applied to lighthouses and lightships, so as to enable vessels in foggy weather to locate dangerous points around the coasts … It [now] seems to me that it should be possible to design [an] apparatus by means of which a ship could radiate or project a divergent beam of these rays in any desired direction, which rays, if coming across a metallic object, such as another steamer or ship, would be reflected back to a receiver screened from the local transmitter on the sending ship, and thereby immediately reveal the presence and bearing of the other ship in fog or thick weather.”
Indeed, such a device had already been patented in 1904 by German engineer Christian Hülsmeyer under the name Obstacle Detector and Ship Navigation Device. Based on contemporary radio technology, the device used a simple high voltage spark gap and a coherer receiver to project a burst of radio waves and listen for the echo, allowing ships, icebergs, and other hazards to be detected at night or in dense fog at ranges up to 3 kilometres. It could also measure the approximate range of a contact by comparing the strength of the echoes from various angles. But while it is now widely recognized as the first practical radar set, Hülsmeyer’s invention unfortunately failed to attract any military or commercial interest.
But in 1933, physicist Rudolf Kühnhold, scientific director of the signals research division of the Kriegsmarine or German navy, began experimenting with radio devices for detecting and tracking ships at sea. After some initial failed experiments, he joined forces with amateur radio operators Paul-Günther Erbslöh and Hans-Karl Freiherr von Willisen and formed a private company called GEMA. In June 1934, the company built a 70-watt transmitter and receiver operating at a frequency of 600 megahertz and wavelength of 50 centimetres and succeeded in detecting large ships in Kiel harbour at a distance of 2 kilometres. Unlike Hülsmeyer’s device, GEMA’s set determined the range of contacts by pulsing the radio beam and measuring the time each pulse took to travel back and forth. The echoes were also displayed visually on a Braun cathode-ray tube. In September 1935, GEMA successfully demonstrated their equipment to the Kriegsmarine, who immediately ordered it into production as the Seetakt, the world’s first operational ship-borne radar. The technology was also later adapted into the Freya early warning radar to defend German territory against enemy bombers. But while the Germans initially led the world in radar technology, development work was ordered halted shortly after the outbreak of the Second World War. This was because German leaders assumed the war would be over in a matter of months; therefore, advanced technologies like radar and jet propulsion would not be needed.
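GEMA’s pulse-ranging trick – timing each pulse’s round trip at the speed of light and halving the distance – reduces to a one-line formula. A minimal sketch, with the echo delay chosen as a hypothetical value matching the 2-kilometre Kiel harbour detections:

```python
# Sketch of pulse-echo ranging: the pulse travels out and back, so the
# target sits at half the distance light covers in the measured delay.
C = 299_792_458  # speed of light, m/s

def range_m(echo_delay_s):
    """Target range given the round-trip time of one pulse."""
    return C * echo_delay_s / 2

# A hypothetical echo arriving 13.3 microseconds after the pulse left
# puts the ship roughly 2 km away:
print(round(range_m(13.3e-6)))  # 1994 (metres)
```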
Meanwhile in Britain, the development of radar was spurred by the most unlikely of circumstances. In 1934, articles appeared in German newspapers claiming that the German military was developing a “death ray” that used powerful radio beams to shoot down enemy aircraft. Intrigued – and apparently desperate for any military advantage, however far-fetched – Dr. Harry E. Wimperis, the Air Ministry Director of Scientific Research, contacted Dr. Robert Watson-Watt of the National Physical Laboratory’s Radio Research Station in Slough and asked him if such a death ray was feasible. Watson-Watt passed on the task to his colleague Arnold Wilkins, who later recalled:
“I received the request on a piece of torn off calendar. Watson Watt liked using that sort of thing being an economical Scotsman. The request, I soon deduced, was to assess the feasibility of a Death Ray, as he was asking me to calculate the amount of radio energy which must be radiated to raise the temperature of a man’s blood to fever heat at a certain distance.”
To the disappointment of science fiction fans everywhere, it took Wilkins less than half an hour to calculate that the power requirements of a working death ray were far too high to be practical. However, in his report he asked if there was anything else the Radio Research Station could do to help the Air Ministry. He soon remembered reading a Post Office report from the previous year claiming that an aircraft flying across an experimental VHF radio telephone link had caused the received signal to fade, and wondered whether such a radio beam could be used to detect and track aircraft at long distances. Discussing the matter with Watson-Watt, Wilkins realized that if they used a radio beam with a wavelength of 50 metres, the 25-metre metal-framed wings of the Handley Page Heyford – the RAF’s standard bomber at the time – would act as a half-wave dipole antenna, reflecting back a much stronger signal than previously assumed. Intrigued by this possibility, the Air Ministry Defence Committee commissioned Watson-Watt to prepare a scientific report on radio detection or RD, which he submitted on February 12, 1935 under the title Detection and Location of Aircraft by Radio Methods. In this remarkable paper, Watson-Watt not only laid out the fundamentals of radar, but predicted other technologies such as radio-based Identification Friend or Foe or IFF systems for aircraft. Impressed, the Air Ministry requested a live demonstration of his theories, scheduled for February 26.
In classic British make-do-and-mend fashion, Watson-Watt was forced to beg, borrow, and steal much of the equipment needed for this demonstration. For the transmitter he used the BBC’s shortwave radio station at Daventry, transmitting at a frequency of 6 megahertz and a wavelength of 49 metres; and for the receiver he borrowed equipment built by the Radio Research Station’s Sir Edward Appleton to study the earth’s ionosphere and packed it into an ancient Morris Commercial van. He and Wilkins drove the van to a field outside the town of Weedon, near Daventry, where they set up their receiving aerials. The following day, witnessed by Air Ministry observer A.P. Rowe, the pair hunched over a glowing cathode ray tube display in the cramped van while RAF pilot Flight Lieutenant F.L. Blucke took off and flew his Heyford bomber at 3,000 metres on a predetermined course between Daventry and Weedon. As the bomber lumbered past at 144 kilometres per hour, the team watched as the glowing green dot slowly climbed up the tube, the signal only being lost when the aircraft was 13 kilometres away. According to legend, Watson-Watt triumphantly declared: “Britain is once more an island!”
With the basic principle of Radio Detection proven, Watson-Watt received £10,000 from the Air Ministry – an enormous sum in those days – to further develop the technology. In May 1935, he and his colleagues set up an experimental establishment on an island at Orfordness in Suffolk, officially named the Ionospheric Research Station to conceal its true purpose. Progress was incredibly swift; after only a month the team succeeded in tracking a Westland Wallace aircraft out to a range of 27 kilometres, while just a week later they accidentally detected a flight of three RAF Hawker Hart fighters at a range of 32 kilometres and discovered they could track each aircraft individually. The team soon established the optimal wavelength for long-distance detection – 10 metres – and a convenient method for displaying the returned echoes. The cathode ray tube displays were fitted with a time base circuit that made a phosphorescent dot sweep rapidly across the screen, forming a horizontal line. Echoes returned from aircraft would deflect the dot upwards, creating peaks along the line. The closer the contact, the further left along the line the peak formed, allowing the aircraft’s range to be determined. Altitude was measured by comparing the returns at two horizontal antennas mounted one above the other, while bearing was measured by comparing returns at two crossed aerials. This allowed the position of an incoming aircraft to be determined in three dimensions with a high level of accuracy at ranges of up to 200 kilometres. However, the relatively long transmitted pulses initially blinded the receiver to echoes from aircraft at ranges under 30 kilometres; to solve this, shorter pulses were adopted. At this point the technology was still known simply as Radio Detection, but American researchers later gave it the name RAdio Detection And Ranging – or RADAR for short.
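The two timing limits on any pulsed radar can be sketched with illustrative values (these are not Chain Home’s actual settings): the pulse width sets the minimum usable range, while the pulse repetition rate caps the maximum unambiguous range.

```python
# Hedged sketch of the timing limits on a pulsed radar, using
# illustrative rather than historical figures.
C = 299_792_458  # speed of light, m/s

def min_range_m(pulse_width_s):
    # While the transmitter is still firing, the receiver is blinded,
    # so echoes from closer than this distance are lost.
    return C * pulse_width_s / 2

def max_unambiguous_range_m(pulse_rate_hz):
    # An echo must return before the next pulse leaves, or it will be
    # credited to the wrong pulse.
    return C / (2 * pulse_rate_hz)

print(round(min_range_m(20e-6) / 1000, 1))       # a 20 us pulse -> 3.0 km minimum
print(round(max_unambiguous_range_m(25) / 1000)) # 25 pulses/s -> 5996 km maximum
```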
With all the technical details worked out, in early 1936 Watson-Watt’s team began constructing the first operational radar station at Bawdsey Manor in Suffolk, the transmitting and receiving aerials being mounted on 100-metre-tall steel and wooden towers. Initially used for technical development and training of future radar crews, in May 1937 the Bawdsey Manor installation became the first site of the Chain Home or CH radar network designed to defend England’s southern and eastern coastlines. By the time it became fully operational in the spring of 1939, Chain Home totalled 20 radar sites stretching from the Isle of Wight to the Firth of Forth. To detect low-flying aircraft, a second parallel network called Chain Home Low based on gun-laying radar sets operating at a wavelength of 1.5 metres was also installed. Along with the Royal Observer Corps, Chain Home formed the early warning component of the so-called Dowding System, named after Air Chief Marshal Hugh Dowding, commander of RAF Fighter Command. The coordinates and heading of enemy bomber formations were transmitted from Chain Home and ROC stations to a filter room at Fighter Command Headquarters at Bentley Priory in London, where ground controllers – many of them members of the Women’s Auxiliary Air Force or WAAFs – plotted this information on a giant map and guided the appropriate fighter units to make the interception. By providing early warning and directing aircraft to specific targets, the Dowding System allowed Britain’s limited fighter force to be deployed with maximum efficiency and effectiveness.
Compared to contemporary German and later wartime designs, Chain Home was relatively primitive and inefficient; instead of mechanically scanning a narrow beam, it projected a giant “floodlight” across the entire sky – a system which consumed huge amounts of power. Despite this, however, the network used readily-available technology, allowing it to be made operational just in time for the outbreak of the Second World War – a fact which would soon grant the British an enormous strategic advantage.
Meanwhile, the Germans watched with curiosity as the Chain Home towers began popping up along the English coast. While they assumed the towers had something to do with aerial defence, the stations looked so different from German designs that they could not be certain. So, in the spring of 1939, General Wolfgang Martini, head of the Luftwaffe signals branch, ordered the mysterious towers investigated. For this mission, he commandeered an unusual vehicle: the LZ-130 Graf Zeppelin, the last German rigid airship and the sister ship of the ill-fated Hindenburg. Fitted with signals intelligence equipment, Graf Zeppelin left its hangar in Frankfurt on July 12, 1939 and headed for Bawdsey on the English coast, cruising just out of sight of the experimental station. Yet instead of the pulsed radar signals they were expecting, the German radio operators just heard random static. No matter how they adjusted their equipment, they could detect nothing. After cruising up and down the English coast for several hours, the airship gave up and returned home empty-handed. She returned early the next month on another spy mission but was equally unsuccessful, leading the Germans to conclude that the British had no working early-warning system. Of course, Chain Home was, in fact, fully operational, and its operators accurately tracked the giant airship throughout its two spy flights. So why didn’t Graf Zeppelin detect their signals? The simple fact was that German radars worked at wavelengths of around 50 centimetres, and they expected the British equipment to be similar. But Chain Home worked on a much longer 10-metre wavelength; the Germans simply hadn’t set their detection equipment to the right frequencies.
This oversight was to have dire consequences in the summer of 1940 when the Luftwaffe set out to destroy the Royal Air Force on the ground and force the British to negotiate peace terms – an aerial struggle now known as the Battle of Britain. Though early in the battle the Germans made some dive-bombing raids against the Chain Home radar stations, this campaign was half-hearted and quickly abandoned. Not only did the Germans fail to recognize the vital importance of the stations, but the installations themselves proved difficult to attack and any damage inflicted was quickly repaired. This failure was to cost the Germans dearly for, aided by Chain Home and the Dowding System, within two months RAF Fighter Command succeeded in chasing the Luftwaffe out of the daylight skies, shooting down 1,736 German bombers and fighters at a loss of just 915 British aircraft. The Germans thus switched to making nighttime raids against civilian centres like London, Liverpool, and Coventry, kicking off the infamous Blitz.
Developed just in the nick of time, the Chain Home radars played a vital role in saving the Royal Air Force from destruction and allowing Britain to survive to fight another day. But as the first phase of the Battle of Britain drew to a close, the system began to show its major shortcomings. While Chain Home and the Dowding System could track incoming enemy aircraft and vector fighters onto an interception course, the actual interceptions had to be made visually. However, when the Germans switched to night bombing this became all but impossible, and for the first few months of the Blitz Britain’s air defences proved all but useless against the nocturnal intruders. Solving this problem would lead to one of the greatest technological breakthroughs of the entire war.
Meanwhile, the RAF’s Bomber Command turned the tables on the Germans and began flying regular bombing raids on Germany and occupied Europe. But just like the Germans, the British quickly discovered that bombing in daylight was suicide, and atrocious losses at the hands of Luftwaffe fighters forced them to switch to night bombing. However, this created problems of its own, as navigating to a target and dropping bombs accurately in pitch-darkness proved all but impossible. Indeed, in the early days of the bombing campaign, it was determined that over 90% of bombs fell on open countryside several kilometres from the intended target. Even worse, by this time the Germans had recognized the utility of radar and developed their own, even more sophisticated version of the Dowding System known as Himmelbett or “four-poster bed” – also called the Kammhuber Line after Generalmajor Josef Kammhuber, commander of Luftwaffe night fighters. Early warning of incoming enemy bombers was given by a chain of coastal radars codenamed Freya – named after a goddess in Norse mythology – operating at a wavelength of 1.2 metres. Once the bombers had passed the coast, they were then handed over to a network of smaller radars codenamed Würzburg operating at a wavelength of 50 centimetres which could accurately track individual bombers and direct searchlights, anti-aircraft guns, and night fighters towards them. As in the Dowding System, all this was coordinated from central command rooms largely staffed by female auxiliaries known as Luftwaffe Helferin. At first, German night fighters, like their British counterparts, had to make the actual interceptions visually, but from 1942 onwards they were increasingly equipped with the 61 centimetre FuG 202 Lichtenstein airborne intercept radar, whose distinctive branching aerials mounted to the aircraft’s nose were nicknamed “mattress” or “stag’s horns.”
The technical details of the Kammhuber Line, which inflicted unsustainable losses in the early days of the night bombing campaign, were soon worked out by British Intelligence by various methods, including signals intelligence, aerial reconnaissance photographs, the breaking of the German Enigma cipher, and even a daring raid on February 27, 1942 codenamed Operation Biting, in which a team of British airborne troops parachuted into a Würzburg site in Bruneval, France to capture pieces of the radar and some of its operators – but that is a story for another video. Once the British knew how the system and its various components worked, they very quickly set about developing countermeasures. The first of these was simply to form the bomber force into a single concentrated stream and fly them through a single point in the Kammhuber Line, swamping the Würzburg radars and preventing night fighters from being accurately guided towards individual targets. However, the Germans quickly countered this tactic by deepening the line to include multiple overlapping defensive sectors and placing large radar-controlled antiaircraft batteries around major industrial targets like factories.
Undeterred, the British turned to a series of increasingly sophisticated electronic countermeasures. The first of these, codenamed Tinsel, Jostle, and Airborne Cigar, jammed night fighters’ radios by broadcasting white noise picked up by microphones mounted in the attacking bombers’ engine nacelles, preventing ground controllers from guiding the fighters to an intercept. Later, when night fighters began being equipped with Lichtenstein airborne radar, another jamming device, codenamed Piperack, was developed to counter it. Mandrel was used to jam the Freya early-warning radars, while Moonshine was a transponder which made a single decoy aircraft appear on radar like an entire flight of bombers, luring German air defences away from the main force. Perfectos, meanwhile, triggered the night fighters’ Identification Friend or Foe or IFF transponders, revealing their positions. Rather more devious – and amusing – was Operation Corona, in which German-speaking radio operators aboard the bombers impersonated the German ground controllers, causing mass confusion among the night fighter force. After learning of this deception, the Germans switched to using female ground controllers, assuming that the RAF would not send WAAFs up in operational aircraft. But the British were already one step ahead, and had set up a team of German-speaking WAAFs at a high-powered transmitting station at Hollywood Manor in Kent. This scheme often resulted in darkly hilarious scenes, as one WAAF, Ruth Tosek, later recalled:
“…they would say, “Das ist eine feindliche Stimme – an enemy voice, don’t listen to it! Don’t listen to it!” And we would reply, “Wir sind die richtige Stimme – We are the real voice.” This would go on until the pilot became completely confused and didn’t know who was who.”
But the most effective Allied radar countermeasure was, ironically, the simplest. Codenamed Window, this consisted of strips of aluminium foil cut to half the operating wavelength of the German Würzburg radar – strips 26.5 centimetres long – creating a half-wave dipole antenna that would radiate back a much stronger echo than a random aircraft structure. Dropped in bundles from attacking aircraft, Window completely overwhelmed the radars, rendering them useless. Indeed, so effective was Window that for months the British hesitated to deploy it, fearing that the Germans would quickly copy it and retaliate in kind. Ironically, the Germans had developed an identical weapon codenamed Düppel, but hesitated to use it for identical reasons. However, RAF Bomber Command losses soon became so high that Prime Minister Winston Churchill ultimately gave the order to “open the window.” The first large-scale operational use of this “doomsday” weapon came during Operation Gomorrah, the series of raids against the port city of Hamburg beginning on July 24, 1943. As expected, Window proved devastatingly effective, completely blinding the German defences and leaving searchlights, antiaircraft guns, and night fighters to grope uselessly around the night sky. This allowed the 746 bombers to reach the city almost unopposed and deliver their deadly payloads. The summer of 1943 had been unusually hot and dry, turning Hamburg into a tinderbox and allowing the incendiary bombs to touch off a gigantic firestorm – a self-sustaining vortex of flame that generated winds of up to 240 kilometres per hour, sweeping citizens off their feet like dried leaves and sucking the air out of air raid shelters. The raids destroyed 61% of Hamburg’s houses and killed some 51,000 of its citizens for the loss of just 12 RAF aircraft. Indeed, afterwards Albert Speer, Hitler’s Minister of Armaments, stated that if raids of similar size had been conducted against just five more major German cities, the Allies could have won the war in 1943.
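Because Window exploited half-wave resonance, sizing it required nothing but the victim radar’s frequency. A rough sketch of that arithmetic (the 566 megahertz figure is an assumption chosen within the Würzburg’s known operating band, not a number from this article):

```python
# Hedged sketch: a chaff strip works as a half-wave dipole, so its cut
# length is simply half the victim radar's wavelength.
C = 299_792_458  # speed of light, m/s

def chaff_length_cm(radar_freq_hz):
    """Resonant strip length (half a wavelength) for a given radar frequency."""
    wavelength_m = C / radar_freq_hz
    return wavelength_m / 2 * 100

# An assumed 566 MHz Wurzburg signal (~53 cm waves) gives strips very
# close to the 26.5 cm the RAF actually cut:
print(round(chaff_length_cm(566e6), 1))  # 26.5 (cm)
```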
With their radar network rendered ineffective, the Germans switched to a tactic codenamed Wilde Sau or “wild boar”, in which night fighters flew above the enemy bomber streams and made their interceptions visually by spotting their targets silhouetted against the searchlights and fires on the ground below. Later, they also developed systems called Würzlaus and Nürnberg that could distinguish between moving bombers and stationary clouds of Window or detect the specific reflections produced by spinning aircraft propellers. Night fighters were also equipped with Lichtenstein SN-2 and Neptun radars operating at frequencies immune to Window, as well as the Flensburg and Naxos devices, which allowed the fighters to home in on Allied bombers’ Monica tail warning radars and H2S ground-scanning radars respectively. These advancements allowed the Germans to resume inflicting heavy losses on Allied bombers starting in early 1944. As a result, the Allies were forced to remove Monica from their aircraft and only turn on their radars at the last possible moment.
But if the Germans ever considered retaliating against Britain using Düppel, they would have found this countermeasure wholly ineffective thanks to a key technological breakthrough: a device capable of producing extremely short-wave centimetric radar beams. The development of this device was initiated by the Royal Navy, which wanted a more precise radar to allow warships and patrol aircraft to detect small targets like surfaced U-boats and aim their guns in poor visibility and at long distances. In 1939, the Admiralty contacted a team at the University of Birmingham under Dr. Mark Oliphant, who would later play a key role in the Manhattan Project that developed the atomic bomb. Oliphant assigned the task of developing centimetric radar to research fellow John Randall and postgraduate student Harry Boot, who began experimenting with a device known as a klystron. Used in many early radar sets, the klystron was a type of amplifying vacuum tube that passed a beam of electrons through a pair of resonant cavities, creating an oscillation that generated high-frequency radio waves. Unfortunately, klystrons could only generate practical radar beams at wavelengths down to 50 centimetres; at shorter wavelengths the power requirements became impractically high. Randall and Boot thus abandoned the klystron and focused on another device known as a split-anode magnetron. In November 1939, the pair sketched out an unusual device they dubbed the cavity magnetron. This consisted of a solid cylindrical copper anode with a central cavity and six smaller resonant cavities arranged around it and connected by slots. A cathode injected electrons into the central cavity while a powerful magnet caused the electron stream to swirl around its edge, sweeping past the openings to the peripheral cavities and creating resonance in a similar manner to blowing air across the mouth of a bottle, thereby – hopefully – generating wavelengths of 10 centimetres or less.
When Randall and Boot presented their concept to Oliphant he was not overly enthusiastic, but given the discouraging lack of success with other approaches he tacitly approved their experiments. With resources almost non-existent, Randall and Boot were forced to resort to the “string and sealing wax” approach that had characterized British physics for over a hundred years. High-voltage transformers were scrounged from the Royal Navy base at Portsmouth, while an old teaching electromagnet was unearthed in one of the university labs. Other components, like the copper anode and high-voltage rectifiers, Randall and Boot manufactured themselves. Emblematic of the make-do-and-mend nature of the endeavour, the ends of the prototype were vacuum-sealed using two half-penny coins and sealing wax. The methods used to test the exotic new device were equally crude; to measure its power output, Randall and Boot connected the magnetron to a series of light bulbs, increasing the power until each bulb burned out and swapping in progressively larger ones. By this method they measured the maximum output at a whopping 400 watts. To measure the emitted wavelength, they used an apparatus known as a Lecher line – a pair of parallel wires attached to the magnetron along which a light bulb could be slid, the bulb lighting up at each half-wavelength. To the pair’s astonishment, their first prototype produced a wavelength of 9.87 centimetres – almost exactly what they were aiming for.
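The Lecher line measurement reduces to doubling the spacing between the bulb’s glow positions, since those sit one half-wavelength apart. A sketch with hypothetical readings chosen to reproduce the pair’s 9.87-centimetre result:

```python
# Hedged sketch of reading a wavelength off a Lecher line: the bulb
# glows at successive antinodes, one half-wavelength apart, so the
# wavelength is twice the average spacing between glow positions.
def wavelength_cm(bulb_positions_cm):
    """Estimate wavelength from glow positions measured along the line."""
    spacings = [b - a for a, b in zip(bulb_positions_cm, bulb_positions_cm[1:])]
    mean_spacing = sum(spacings) / len(spacings)
    return 2 * mean_spacing

# Hypothetical readings roughly 4.94 cm apart recover a wavelength of
# about 9.87 cm:
print(round(wavelength_cm([3.1, 8.04, 12.97, 17.91]), 2))  # 9.87
```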
News of the breakthrough at the University of Birmingham spread like wildfire through the British military establishment, who immediately ordered that top priority be given to its refinement and deployment. By June 1940 the General Electric Company laboratories in Wembley, Middlesex had developed the NT98, the first practical, mass-producible air-cooled cavity magnetron light enough to be installed in aircraft. These early magnetrons suffered from the problem of random frequency-jumping, but this was soon solved by Dr. J. Sayers by “strapping” alternate resonant cavities with heavy copper wires. So urgent was the need for these magnetrons that the first production models were manufactured in great haste using a drilling jig made from a Colt revolver cylinder. In August 1940, with Britain under threat of Nazi invasion, a delegation of scientists known as the Tizard Mission travelled to the United States bearing Britain’s most important technical breakthroughs – including an early production cavity magnetron. This gift allowed magnetrons to be mass-produced in American factories and opened the doors for further exchanges of key military technology.
It is difficult to overstate the importance of the cavity magnetron to the course of the Second World War, with this now largely overlooked breakthrough ranking higher than better-known technical developments like jet propulsion or the atomic bomb. Indeed, American historian James Phinney Baxter III later declared the magnetron to be “The most valuable cargo ever to reach these shores.”
In late 1941, RAF Bristol Beaufighter night fighters began to be equipped with the Airborne Intercept or AI Mk.VII radar, which produced a narrow, 25 kilowatt beam with a wavelength of 9.1 centimetres that could accurately pinpoint enemy aircraft in pitch-darkness at a range of between 120 metres and 10 kilometres. Previous Airborne Intercept radars operating at a wavelength of 1.5 metres were plagued by random echoes from the ground below, limiting their effective range to around 5 kilometres. This technology allowed the Royal Air Force to finally sweep the Luftwaffe from British skies, bringing the dreaded Blitz to an end.
Meanwhile, centimetric radars fitted to Royal Navy and RAF Coastal Command aircraft like the Vickers Wellington, Consolidated B-24 Liberator, and Consolidated PBY Catalina allowed them to detect even the periscopes of submerged German U-boats. Previously, these aircraft were fitted with the less sophisticated Air to Surface Vessel or ASV Mk.II radar operating at a wavelength of 1.25 metres. While these sets allowed aircraft to detect a surfaced U-boat at a range of 100 kilometres, the Germans soon developed a detector called Metox that allowed them to crash-dive and escape before an attacking aircraft came into firing range. With the introduction of the magnetron-based ASV Mk.III, however, the U-boats had nowhere to hide and were forced to spend more and more time submerged, greatly reducing their effectiveness. More than any other breakthrough, the cavity magnetron helped turn the tide of the Battle of the Atlantic, the longest-fought campaign of the entire war and the only one which Winston Churchill admitted truly frightened him.
The magnetron also played a major role in the Allied strategic bombing campaign against Germany and occupied Europe. As previously mentioned, when the British switched from daylight to night bombing early in the war, they discovered that accurately reaching and bombing specific targets using traditional methods like celestial navigation and dead reckoning was all but impossible. RAF Bomber Command thus switched from precision bombing of industrial and military targets like factories and airfields to a strategy of “Area Bombing” with incendiaries against large population centres. They also copied the pathfinder strategy pioneered by the Luftwaffe during the Blitz wherein an elite group of aircraft with highly-skilled navigators flew ahead of the main bomber force to mark the target with flares and incendiary bombs. To further increase their accuracy, the pathfinders soon adopted a variety of electronic navigation aids, another technique copied from the Germans. The first of these, codenamed Gee, was introduced operationally in 1942 and consisted of three transmitting stations designated A, B, and C spaced around 160 kilometres apart. These stations transmitted synchronized pulses such that depending on his position, an aircraft navigator would receive the pulses from one of the stations ahead of the others. If all the points where all three signals were received simultaneously were plotted, they formed long hyperbolic curves known as isochrones – similar to the isobar pressure lines on a weather map. These curves intersected with each other at certain points, forming a huge grid projected across the European continent – hence the codename “Gee” for “Grid” – against which aircraft could fix their position. The great advantage of Gee was that it was entirely passive, the receiving aircraft emitting no signals which could give away its position, and could be used by any number of aircraft at once. 
Also, unlike German beam systems, it gave no indication as to the target of a particular raid. Gee inspired the creation of a similar American system known as LORAN or LOng RAnge Navigation, which remained in use until the 1980s as a general maritime navigation aid for personal and commercial vessels.
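The hyperbolic principle behind Gee and LORAN can be illustrated with a short back-of-the-envelope sketch. The station coordinates and aircraft position below are invented for the example and do not correspond to any historical installation:

```python
# Toy illustration of the hyperbolic principle behind Gee and LORAN.
# Station coordinates and the aircraft position are invented for this
# example. The navigator measures only the DIFFERENCE in arrival times
# between station pairs; every point with the same difference lies on
# one hyperbolic line of position (an isochrone), and two such lines
# intersect to give a position fix.

import math

C_KM_PER_US = 0.299792458  # speed of light, km per microsecond

def arrival_time_us(station, receiver):
    """One-way propagation time from a ground station to the aircraft."""
    return math.dist(station, receiver) / C_KM_PER_US

# Master station A and secondaries B and C, roughly 160 km apart:
A, B, C = (0.0, 0.0), (160.0, 0.0), (0.0, 160.0)
aircraft = (400.0, 250.0)  # hypothetical position over the continent

delta_ab = arrival_time_us(A, aircraft) - arrival_time_us(B, aircraft)
delta_ac = arrival_time_us(A, aircraft) - arrival_time_us(C, aircraft)

# Plotting both differences against a pre-computed grid of isochrones
# yields the fix - entirely passively, with no transmissions from the
# aircraft itself.
print(f"A-B time difference: {delta_ab:.1f} microseconds")
print(f"A-C time difference: {delta_ac:.1f} microseconds")
```

The receiver never needs to know absolute transmission times, only which pulse arrived first and by how much, which is why any number of aircraft could use the grid simultaneously without betraying their positions.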
However, while Gee could get a bomber to a particular city, it was not accurate enough to guide it over a specific target like a factory and had a maximum range of 250 kilometres – too short to reach the heart of Germany. Consequently, in early 1943 the British developed a more sophisticated system codenamed Oboe. Unlike Gee, Oboe was an active system, incorporating a special transceiver aboard the aircraft which received and returned signals from a pair of transmitting stations in Britain. The first, dubbed Cat, kept the bomber flying along a circular course intersecting with the target. If the pilot drifted off this course, he heard a string of Morse Code dots or dashes in his headphones as in the ubiquitous Lorenz blind landing system, prompting him to correct left or right. Meanwhile, the second station, codenamed Mouse, kept track of the bomber’s position and sent a signal when it was over the target, automatically triggering the release of bombs. The accuracy of Oboe was around 100 metres at a range of 400 kilometres – enough to hit a factory in Germany’s industrial Ruhr area. However, it could not reach as far as Berlin and, being an active system, gave off signals that German night fighters could home in on. The British thus set about developing a self-contained navigation system whose reach would only be limited by the range of the aircraft carrying it.
This system took the form of H2S, a modified version of the Airborne Intercept radar pointed downwards, allowing ground features like coastlines, rivers and canals, and towns to be identified in pitch darkness or through thick cloud. This was accomplished using a rotating parabolic antenna housed in a streamlined radome and a circular plan position indicator with a persistent phosphor that refreshed itself with every rotation of the sweep – the classic image of a radar screen we are all familiar with today. The system was originally named BN for Blind Navigation, but was soon renamed H2S – the chemical formula of the pungent gas Hydrogen Sulphide – allegedly after a comment made by Professor Frederick Lindemann, scientific advisor to Winston Churchill:
“It was stinking because it ought to have been done years before!”
Interestingly, the RAF was initially hesitant to deploy H2S since it was feared that the Germans would capture and copy the top-secret magnetron technology. Indeed, experiments had proven that the copper anode at the heart of the magnetron was virtually indestructible and would likely survive even the most violent crash or self-destruct charge largely intact. Technicians were thus assigned to develop an alternate version using the well-known Klystron tubes, but this proved technically infeasible. In the end, the strategic benefits of deploying H2S were deemed to outweigh the risks, and the system was first deployed on a mission in January 1943 when Short Stirling and Handley Page Halifax bombers of 7 and 35 Squadrons successfully located and marked the city of Hamburg in extremely bad weather. H2S played a vital role in the latter stages of the Allied bombing campaign in Europe, while a similar blind bombing radar called the AN/APQ-13 was used by American Boeing B-29 Superfortress bombers against targets in Japan. As predicted, the Germans did manage to capture and copy magnetrons from downed Allied aircraft and produce their own centimetric airborne intercept radar – codenamed Berlin – but these did not see service before the war ended.
One of the Allies’ greatest secret weapons, the cavity magnetron made an outsized contribution to the war effort, likely shortening the conflict by several years. This is particularly impressive given that Randall and Boot’s original prototype – including the halfpenny pieces used to seal the ends – is estimated to have cost only £200 – a bargain by any measure. Other radar technology played a similarly key role in the conflict. For example, tiny radar sets known as proximity fuzes were placed inside anti-aircraft artillery shells, allowing them to explode when they passed close to a target. Such fuzes were vital in allowing the U.S. Navy to fend off Japanese Kamikaze suicide aircraft attacks and the British to defeat the German V-1 flying bomb or “doodlebug” – and for more on this incredible battle, please check out our previous video A Wingtip and a Prayer: the Insane Way British Pilots Defeated Germany’s Secret Weapon. And radar altimeters based on tail warning radar sets were used to detonate the two atomic bombs dropped on Hiroshima and Nagasaki, helping to deliver the killing blows that finally brought the war to an end.
By this point, you are probably wondering what any of this has to do with microwave ovens. Well, as you’ll recall from the very beginning of the video, microwaves are defined as electromagnetic radiation with a wavelength between one metre and one millimetre – exactly the kind of radiation generated by a cavity magnetron. But while experiments with cooking food using shortwave radiation had been conducted before the war, it took a happy – and hilarious – accident for the wartime sword of radar to be beaten into the peacetime ploughshare of everyone’s favourite kitchen appliance.
In 1945, Percy Spencer, an engineer at the Raytheon Company in Cambridge, Massachusetts, was working on the magnetron of an active radar set when he reached into his pocket for a snack. To his surprise, he discovered that his Mr. Goodbar chocolate bar had somehow melted into a sticky mess. This effect had been observed by several other technicians, but Spencer was determined to know more. He thus embarked on a series of legendary experiments, as recounted in a 1958 Reader’s Digest article:
“He sent a boy out for a package of popcorn. When he held it near a magnetron, popcorn exploded all over the lab. Next morning he brought in a kettle, cut a hole in the side and put an uncooked egg (in its shell) into the pot. Then he moved a magnetron against the hole and turned on the juice. A skeptical engineer peeked over the top of the pot just in time to catch a face-full of cooked egg. The reason? The yolk cooked faster than the outside, causing the egg to burst.”
These experiments confirmed that it was indeed the microwaves emitted by the magnetron – and not something else like radiated heat – that were heating up the food. Today, we know that microwave ovens work because the constantly-changing electromagnetic field causes molecules like water and fat with a strong dipole structure – that is, one negatively and one positively-charged end – to rotate rapidly, generating heat that is transferred to the rest of the food. This is why dry, low-fat foods like crackers don’t heat up when placed in a microwave oven. Contrary to popular belief, however, microwave ovens don’t cook food “from the inside out.” Rather, this effect occurs when there is a greater concentration of absorbent moisture inside the food than on its surface – such as in the classic hot pocket with its tongue-searing molten lava filling. Also, microwaves emerging from the magnetron bounce around randomly inside the oven and interfere with themselves, producing zones of varying power density that can cook different regions of a food item at different rates. However, this effect is usually countered through the use of a turntable to slowly rotate the food or a motorized waveguide to more evenly distribute the microwaves around the interior of the oven.
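As a rough illustration of why these hot and cold zones appear: standing waves in the cavity produce intensity peaks every half-wavelength, which at the 2.45 Gigahertz frequency of a typical consumer oven works out to roughly 6 centimetres – comparable to the size of the food itself:

```python
# Back-of-the-envelope estimate of hot-spot spacing in a microwave oven.
# Standing waves in the cavity produce intensity peaks every half
# wavelength - the zones of varying power density described above.

C = 299_792_458   # speed of light, m/s
FREQ = 2.45e9     # typical consumer oven frequency, Hz

wavelength = C / FREQ              # ~12.2 cm
hot_spot_spacing = wavelength / 2  # ~6.1 cm between intensity peaks

print(f"Wavelength:       {wavelength * 100:.1f} cm")
print(f"Hot-spot spacing: {hot_spot_spacing * 100:.1f} cm")
```

Since a plate of leftovers easily spans several of these peaks and troughs, rotating the food through the pattern evens out the dose.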
While Spencer’s supposedly serendipitous discovery has become the stuff of legend, in reality the development of the microwave oven was a gradual process involving dozens of small discoveries and developments by multiple technicians. Nonetheless, on October 8, 1945, Spencer filed US patent 2,495,429 for a Method of Treating Foodstuffs, in which he outlined basic technical details such as the optimal frequency – 2.45 Gigahertz – power, and cavity size for efficient cooking. Alarmingly, the oven design in his patent featured a conveyor belt for moving food past the magnetron and open ends which would have allowed face-melting microwave radiation to leak out of the appliance! In a follow-up patent filed in 1949, Spencer described the now-ubiquitous process of microwaving popcorn, while in a 1951 patent he described how to broil a lobster in a microwave by shoving a “pencil like rod” up its behind to prevent the tail from curling and becoming harder to cook. As if lobsters didn’t have it hard enough already…
Based on the work of Spencer and others – in particular engineers Lawrence Marshall, Fritz Gross, and Marvin Bock – in 1947 Raytheon produced its first prototype microwave ovens, one of which was installed in a Boston restaurant for testing. Another prototype was incorporated into a delightfully-named “Speedy Weeny” vending machine in New York’s Grand Central Terminal which dispensed freshly-microwaved hot dogs. Later that year, Raytheon finally launched the world’s first production commercial model under the brand name Radarange. But while today we think of microwave ovens as compact, convenient countertop appliances, the first Radaranges were absolute behemoths, standing 1.8 metres tall and weighing 340 kilograms. They consumed 3 kilowatts – three times as much as a modern microwave – and produced so much waste heat that they had to be water-cooled. With an eye-watering price tag of $5,000 – $68,000 in today’s money – these units were far out of reach for the average consumer and were intended for use in restaurants, cafeterias, airliners, and ships’ galleys – with one example being installed aboard the revolutionary nuclear-powered cargo ship and passenger liner N.S. Savannah in 1961.
In 1955, Raytheon licensed its microwave technology to appliance manufacturer Tappan, who introduced the first microwave oven designed for consumer use: the RL-1. Designed to be mounted to the wall like a conventional oven, the RL-1 still cost a whopping $1,295 – $11,000 in today’s money – resulting in disappointing sales. But technology was rapidly improving, and soon more reasonably-priced models with lighter, air-cooled magnetrons began to appear on the market. In 1964 the Sharp Corporation of Japan introduced the first microwave oven with a rotating turntable to address the traditional problem of uneven heating. And at around the same time, the Litton Company developed a magnetron that could survive a no-load condition – that is, running with nothing in the oven to absorb the microwaves – making microwave ovens much safer and longer-lasting. Finally, in 1967, the Amana Corporation, a subsidiary of Raytheon, introduced the countertop Radarange, the first compact microwave oven as we would recognize it today. This model took an unusual approach to promoting even food heating, using a rotating wave guide or mode stirrer to distribute the microwave beam, allowing the food to remain stationary. With a price tag of $495 – $5,000 in today’s dollars – it was still very expensive, but with simplified designs and advances in manufacturing techniques the price gradually fell and throughout the 1970s and 80s microwave ovens began to be found in more and more homes. Indeed, while only around 40,000 microwave ovens were sold in the United States in 1970, by 1975 this number had risen to nearly one million. And while only 1% of American households owned a microwave oven in 1971, by 1986 around 25% did. By 1997, this figure had risen to an astonishing 90%.
Adoption was slower elsewhere in the world – with the exception of Japan, where the popularity of microwaves benefitted from simpler and cheaper-to-manufacture magnetron designs – with 88% of Canadian, 65% of French, 5% of Indian, 40% of Russian, 38% of South African, and 16% of Vietnamese households owning a microwave oven by the turn of the 21st century.
And as microwave ovens became increasingly popular, manufacturers began adding additional features. For example, in the early 1970s Tappan began offering hybrid models that incorporated both microwave capability and conventional oven heating elements. In 1975, Amana introduced the first automatic defrost feature with their RR-4D model, while the following year their RR-6 model incorporated the first fully digital, microprocessor-based control panel. Since then, microwave ovens have more or less settled around a standard design, with different brands and models differing largely in their styling, control panel layout, and number of cooking modes like defrost, popcorn, etc. Indeed, as of 2020, nearly all microwave ovens sold in the United States, regardless of brand, are manufactured by a single company: the Midea Group, headquartered in Beijiao, China. Most consumer microwave ovens consume between 600 and 1,200 watts of power and operate at the same 2.45 Gigahertz frequency established by Percy Spencer in 1945. As many telecommunications technologies operate at microwave frequencies, in 1947 Raytheon and General Electric petitioned the Federal Communications Commission or FCC to set aside a section of the electromagnetic spectrum for use by microwave ovens, medical diathermy machines, and other specialized equipment so as not to interfere with vital communications networks. The two allocated frequencies, 915 ± 13 Megahertz and 2,450 ± 50 Megahertz, are today known as the Industrial/Scientific/Medical or ISM bands. The 915 Megahertz frequency is commonly used by industrial microwave ovens as the longer wavelength penetrates deeper into the food and raises its temperature faster, speeding up the overall cooking process.
Interestingly, no matter how many power or specialized cooking settings a microwave has, the magnetron always operates at a constant power output. Different heat settings are thus achieved through duty-cycle modulation, i.e. switching the magnetron on and off for different periods of time. This way, the rate and total amount of energy deposited into the food can be controlled. For example, if a microwave is set to half power, then the magnetron will be switched on only half the time. Similarly, since ice absorbs microwaves less efficiently than liquid water, the defrost setting keeps the magnetron switched on for even less of the total cooking time, allowing the food to defrost slowly without overheating and prematurely cooking it. By contrast, inverter microwaves use solid-state switching gear as opposed to the traditional transformer and electronic relay, allowing them to operate the magnetron continuously at low power. This heats the food more slowly and evenly, preventing the formation of hot spots that can prematurely cook or ruin sensitive, sugar and fat-rich foods like meat, dairy, and desserts. In addition, inverter microwaves tend to be more energy-efficient than more traditional designs.
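The scheme can be sketched in a few lines of code. This is a minimal illustration of the duty-cycle idea, not any manufacturer's actual control logic; the magnetron's full-power rating and the cycle length are assumed values chosen for the example:

```python
# Minimal sketch of duty-cycle power control in a conventional microwave
# oven - not any manufacturer's actual firmware. The magnetron's full
# output and the on/off cycle length are assumed values for illustration.

MAGNETRON_WATTS = 1000  # full output while switched on (assumed)
CYCLE_SECONDS = 20      # one on/off control period (assumed)

def magnetron_schedule(power_percent, cook_seconds):
    """Return per-cycle on/off times, average power, and total energy."""
    on_s = CYCLE_SECONDS * power_percent / 100
    off_s = CYCLE_SECONDS - on_s
    average_watts = MAGNETRON_WATTS * power_percent / 100
    energy_joules = average_watts * cook_seconds
    return on_s, off_s, average_watts, energy_joules

# At 50% power the magnetron runs for only half of each cycle:
on_s, off_s, avg_w, energy_j = magnetron_schedule(50, 120)
print(f"on {on_s:.0f} s / off {off_s:.0f} s per cycle, "
      f"average {avg_w:.0f} W, {energy_j / 1000:.0f} kJ over 2 minutes")
```

An inverter oven, by contrast, would simply run its magnetron continuously at the lower average power rather than alternating between full blast and nothing.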
Given their ability to cook meat and fish, it goes without saying that the microwaves produced by a magnetron are potentially dangerous to living tissue. Particularly vulnerable is the eye which, having no cooling blood vessels of its own, is prone to overheating when exposed to microwaves, potentially resulting in the formation of cataracts later in life. Thankfully, the microwaves produced by a microwave oven are fully contained by the appliance’s metallic case, which acts as a Faraday cage – named after the great English physicist Michael Faraday. Faraday cages exploit the fact that rapidly-changing electromagnetic fields cannot penetrate a closed electrical conductor, and are widely used to protect sensitive electronic equipment from external radio frequency interference or RFI or – in this case – to prevent electromagnetic radiation from getting out. But the walls of a Faraday cage need not be solid; so long as any openings are smaller than the wavelength of the radiation in question, said radiation will be effectively blocked. This is why the window in the door of most microwave ovens is covered in a fine metal mesh. Further safety is provided by a set of interlock switches that prevents the magnetron from operating while the door is open. Thanks to these features, you can watch your microwave burrito spin round and round without fear of your face melting off.
And of course, no discussion of microwave ovens would be complete without addressing that most taboo of household scientific experiments: putting metal in the microwave. The reason this is generally frowned upon is because metallic objects like forks and aluminium foil act like radio antennas, absorbing microwave energy and developing strong induced currents and voltages. If the object in question has sharp edges or points, electric charge concentrates there, exceeding the breakdown potential of the surrounding air and throwing off lightning-like arcs called corona discharges that can potentially start a fire. Even worse, smoother metallic objects can reflect microwaves back at the magnetron, causing feedback that can ultimately overload and destroy the component, rendering the entire oven useless. Finally, absorbed microwaves can cause metallic objects to become extremely hot, posing a burn hazard. Indeed, this effect is sometimes exploited by manufacturers of microwaved food. The packaging for pizzas, pies, and other items that would normally become soggy when cooked in a microwave often incorporates a thin metallic layer that absorbs and radiates heat back onto the food, creating a deliciously crispy crust. But unless you have a special microwave set aside specifically for mad scientific experiments, please take our advice: just don’t do it.
Now, while radar sets and microwave ovens are the most common and famous applications for the magnetron, they are far from the only ones. Indeed, high-powered microwave emitters are widely used in industry for applications as diverse as softening plastic before moulding, drying potato chips, and roasting coffee beans and peanuts. And newer, more exotic applications are being discovered all the time. For example, in 2023, researchers at Australia’s Macquarie University developed a method for using microwaves to selectively decompose the silicon in expired solar panels, allowing the other component materials like glass, plastic, and metal to be safely recycled. In the traditional recycling method, solar panels are crushed, heated to 1400 degrees Celsius, and treated with harsh solvents to dissolve away the plastic components – an energy-intensive and environmentally unfriendly process. Similarly, in 2022 researchers at Mitsui Chemicals in Japan announced a method for using microwaves to decompose polyurethane foam into its component chemicals without the use of toxic solvents. Meanwhile, a team at the Korea Electrotechnology Research Institute or KERI led by Dr. Sunshin Jung has developed a device designed to bombard agricultural fields with microwave radiation, selectively killing pests hidden under the soil without the need for pesticides. Even more exotic, engineers at Penn State University have devised a microwave-based system that could eventually be transported to the moon and used to smelt minerals found in the lunar dust or regolith – either to extract valuable metals like titanium or build permanent structures on the lunar surface. In the grand tradition of cheeky tech acronyms, the team dubbed their creation Smelting with Microwave Energy for Lunar Technologies System for In-Situ Resource Processing – or SMELT.
From defeating the Nazis and Imperial Japanese to making dinnertime a bit more convenient to smelting metals on the moon, the cavity magnetron has come a long way, and stands as one of the greatest stories of swords into ploughshares in modern history. Now, if you’ll excuse me, there’s a frozen burrito with my name on it…
Johnson, Brian, The Secret War, Arrow Books, 1978
Ackerman, Evan, A Brief History of the Microwave Oven, IEEE Spectrum, September 30, 2016, https://spectrum.ieee.org/a-brief-history-of-the-microwave-oven
History of the Microwave Oven, Whirlpool, https://www.whirlpool.com/blog/kitchen/history-of-microwave.html
History of the Microwave, http://www.historyofmicrowave.com/
History of the Microwave Oven, Microwaves 101, https://www.microwaves101.com/encyclopedias/history-of-the-microwave-oven
Skolnik, Merrill, History of Radar, Encyclopedia Britannica, August 17, 2024, https://www.britannica.com/technology/radar/History-of-radar
NASA-Funded Student Team Builds Microwave System to Smelt Metal on the Moon, Penn State University, https://www.psu.edu/news/engineering/story/nasa-funded-student-team-builds-microwave-system-smelt-metal-moon/
Microwaves Heat the Soil to Eliminate Pests and Help Farmers Manage Soil Diseases, December 19, 2023, https://phys.org/news/2023-12-microwaves-soil-pests-farmers-diseases.html
Moore, Stephen, Japanese Project Aims to Recycle Polyurethane Foam Using Microwaves, Plastics Today, June 22, 2022, https://www.plasticstoday.com/advanced-recycling/japanese-project-aims-to-recycle-polyurethane-foam-using-microwaves
Patel, Prachi, The Surprising Appliance That Could Make Solar Panels Easier to Produce and Recycle, Anthropocene Magazine, April 27, 2023, https://www.anthropocenemagazine.org/2023/04/the-surprising-appliance-that-could-make-solar-panels-easier-to-produce-and-recycle/