First computer-assisted weather forecast

WHO: John von Neumann, Jule Charney, Arnt Eliassen, Ragnar Fjørtoft
WHAT: First
WHERE: United States
WHEN: 01 November 1950

The first computer-assisted weather forecast was made using ENIAC, the first general-purpose electronic digital computer, at the Aberdeen Proving Ground in Maryland, USA, in 1950. The program – developed by Jule Charney (USA), Arnt Eliassen (NOR), Ragnar Fjørtoft (NOR) and John von Neumann (USA, b. HUN) – used a drastically simplified version of the seven "primitive equations" (the basic set of equations that govern atmospheric changes) to model how the pattern of atmospheric pressure over North America would shift over 24 hours. The work was carried out in the spring of 1950 and published as "Numerical Integration of the Barotropic Vorticity Equation" on 1 November 1950.
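
For context, the barotropic vorticity equation named in the paper's title is usually written in the form below (a standard modern rendering, not a transcription of the paper's own notation). It says that the absolute vorticity (the local "spin" of the air, ζ, plus the Coriolis parameter, f) is simply carried along by a horizontal, non-divergent wind v derived from a streamfunction ψ:

    \frac{\partial \zeta}{\partial t} = -\mathbf{v}\cdot\nabla(\zeta + f),
    \qquad \zeta = \nabla^{2}\psi,
    \qquad \mathbf{v} = \mathbf{k}\times\nabla\psi

Because this single equation supports none of the fast-moving sound and gravity waves present in the full primitive equations, it demands far less computation, which is what made a machine forecast feasible at all.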

In the early 1940s, John von Neumann was one of a handful of researchers – along with Alan Turing in the UK, Konrad Zuse in Germany, and John Mauchly & J. Presper Eckert at the nearby University of Pennsylvania – working on the concept we'd today recognize as a computer. While roughing out the logical organization of what is now called the "von Neumann architecture", he spoke to scientists from various disciplines about potential uses for an electronic digital computer.

The meteorologists he spoke to were extremely enthusiastic about the possibilities of von Neumann's computer, which offered the promise of a breakthrough in a discipline that had effectively hit a wall a decade or so earlier. The basic equations and underlying theory behind what is known as numerical weather prediction (the use of mathematical formulae to extrapolate future changes from existing data) were mostly understood by the 1930s, but the proposed methods involved such a vast amount of manual calculation as to be almost impossible to test.

Famously, the pioneering mathematician Lewis Fry Richardson had spent 12 years in the 1910s and 1920s laboriously working out a six-hour forecast for central Europe, based on observations from a single day in 1910, only to find that his final results were completely wrong. He estimated that producing forecasts quickly enough to be useful would require a workforce of around 64,000 human "computers".

In 1947, von Neumann's wife Klara and his colleague Nick Metropolis overhauled the now two-year-old ENIAC, which was by then housed at the US Army's Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland. Their upgrades gave it a small amount of digital memory and the ability to run programs stored in that memory – the original version had to be manually rewired and reconfigured for each computing task. With these upgrades, von Neumann's team had access to a computer capable of doing the work of Richardson's hypothetical army of human "computers".

Von Neumann recruited the meteorologist Jule Charney to lead the forecasting project, and Charney brought on board two scientists from the Norwegian Meteorological Institute, Arnt Eliassen and Ragnar Fjørtoft, to help develop the program. Together they worked to simplify Richardson's equations (ENIAC's memory was too small to handle the full complexity of his technique) and, importantly, they added a filtering method to eliminate the "meteorological noise" (short-term fluctuations in the source data) that had thrown off Richardson's earlier manual attempt at numerical prediction.

By 1949, the team had decided to model the shifts in air pressure over North America across a period of 24 hours, restricting themselves to changes on a single two-dimensional surface in the mid-troposphere (the 500-millibar level, roughly 5.5 km up). This was a technical demonstration rather than a practically useful forecast. The computer's limited memory also meant that the forecast's resolution (the size of each "cell" of calculations) was coarse, with each cell covering around 540,000 km², a square roughly 735 km on a side – modern models use cells as small as 9 km².
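
To give a feel for what one step of such a calculation involves, the sketch below advances a toy barotropic vorticity model by a single time step on a coarse grid. It is written in modern Python purely for illustration: the grid dimensions, time step, latitude range and starting field are assumptions chosen to match the scale described above, not a reconstruction of the actual ENIAC program or its 1949 input data.

    # Toy, single-step integration of the barotropic vorticity equation on a
    # coarse grid. Purely illustrative; all values below are assumptions.
    import numpy as np

    NY, NX = 15, 18          # grid points (assumed; the real grid was similarly small)
    DS = 735e3               # grid spacing in metres (cells of ~540,000 km^2, as above)
    DT = 3600.0              # one-hour time step (assumed)
    OMEGA = 7.292e-5         # Earth's rotation rate, rad/s

    # Coriolis parameter f on latitude rows spanning roughly 25-70 degrees north
    lats = np.deg2rad(np.linspace(25.0, 70.0, NY))
    f = np.repeat((2 * OMEGA * np.sin(lats))[:, None], NX, axis=1)

    def laplacian(field, ds):
        """Five-point finite-difference Laplacian (interior points only)."""
        lap = np.zeros_like(field)
        lap[1:-1, 1:-1] = (field[2:, 1:-1] + field[:-2, 1:-1] +
                           field[1:-1, 2:] + field[1:-1, :-2] -
                           4.0 * field[1:-1, 1:-1]) / ds ** 2
        return lap

    def jacobian(psi, q, ds):
        """Centred-difference Jacobian J(psi, q), equal to v . grad(q) for the
        non-divergent wind v = k x grad(psi)."""
        dpsi_dy, dpsi_dx = np.gradient(psi, ds)
        dq_dy, dq_dx = np.gradient(q, ds)
        return dpsi_dx * dq_dy - dpsi_dy * dq_dx

    # A smooth, made-up streamfunction standing in for real upper-air observations
    xx, yy = np.meshgrid(np.linspace(0, 2 * np.pi, NX), np.linspace(0, np.pi, NY))
    psi = 1.0e7 * np.sin(yy) * np.cos(xx)

    zeta = laplacian(psi, DS)                  # relative vorticity
    dzeta_dt = -jacobian(psi, zeta + f, DS)    # advection of absolute vorticity
    zeta_next = zeta + DT * dzeta_dt           # one forward step of the forecast
    # A full forecast would now solve zeta = laplacian(psi) to recover psi from
    # zeta_next, then repeat the step out to 24 hours.

Even this toy version makes the bookkeeping burden clear: every grid point needs its own chain of differences, products and sums at every time step, which is exactly the work that had to be hand-translated into machine instructions.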

The work of programming ENIAC to carry out these calculations was done by Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. They worked out the sequence of calculations the computer would have to run by hand, checking each step, and then translated them into instructions in ENIAC's machine code. Once this was done, the instructions – along with the initial input data – were punched onto around 100,000 cards and fed into the computer.

Four sets of data were used for the experiment, giving 24-hour forecasts for 5, 30 and 31 January and 13 February 1949. In theory, each forecast involved around 250,000 calculations and should have taken roughly 30 minutes to run. In practice, the time spent loading punch cards, plus various mechanical breakdowns (ENIAC used 17,468 vacuum tubes, which failed frequently), meant that the whole process, covering all four sets of data, took around 24 hours.
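
Taking the article's own round figures at face value, a quick back-of-the-envelope calculation (in Python, for convenience) shows both how modest the implied computing rate was and how much of the elapsed time went on card handling and repairs rather than arithmetic:

    # Rough figures derived solely from the numbers quoted above; all approximate.
    calcs_per_forecast = 250_000
    compute_time_s = 30 * 60                          # ~30 minutes of pure computation
    elapsed_s = 24 * 60 * 60                          # ~24 hours for all four forecasts

    rate = calcs_per_forecast / compute_time_s        # ~139 calculations per second
    overhead = 1 - (4 * compute_time_s) / elapsed_s   # ~92% of elapsed time was overhead
    print(f"{rate:.0f} calculations/s; {overhead:.0%} of the run was not computation")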

As with Richardson's earlier forecast, the actual quality of the predictions was poor, but the experiment demonstrated that the idea was viable. A numerical prediction had been generated by machine, and in the team's 1950 paper on the process, Charney and his co-authors excitedly announced that "with a thorough routinization of the operations, a 24-hour prediction could be made on the ENIAC in as little as 12 hours".