1. Camera Obscura: 5th century B.C.

Long before there was the camera, there was the camera obscura. Literally translated as “dark chamber,” these devices consisted of darkened rooms or enclosed boxes with a tiny opening on one side. When sunlight passed through this “pinhole” and into the chamber, it projected a hazy picture of the outside world onto a wall or screen. This optical phenomenon was almost certainly known to the ancients—both Aristotle and the Chinese philosopher Mozi mentioned it—but a full account of how it worked didn’t arrive until the 11th century, when the Arab scholar Alhazen described a working model. The camera obscura later became a popular tool during the Middle Ages and the Renaissance, particularly after inventors began using biconvex lenses to brighten its images. Astronomers used it to protect their eyes while observing the sun and solar eclipses, and artists employed it as an aid in portraiture and landscape painting.

2. Photochemistry: 18th and 19th centuries

While the camera obscura allowed for the viewing of images in real time, several centuries passed before inventors stumbled upon a method for permanently preserving them using chemicals. A major breakthrough came in 1725, when the German professor Johann Heinrich Schulze found that silver salts darkened when exposed to light. Fascinated, Schulze cut letters out of a piece of paper and laid the stencil on top of a silver mixture. “Before long,” he recounted, “I found that the sun’s rays…wrote the words and sentences so accurately or distinctly on the chalk sediment, that many people…were led to attribute the result to all kinds of artifices.” Others later built on Schulze’s research, and in 1827, a French inventor named Joseph Nicéphore Niépce used a camera obscura and a pewter plate coated with a light-sensitive material called Bitumen of Judea to capture and “fix” an image. His eight-hour exposure of the courtyard of his home is now considered the world’s first photograph.

3. Daguerreotype: 1837

Photography’s next giant leap came courtesy of Louis Daguerre, a French artist and inventor who partnered with Niépce in the late 1820s. In 1837, Daguerre discovered that exposing iodized silver plates to light left behind a faint image that could be developed using mercury fumes. The new technique not only produced a sharper and more refined picture, but it also cut the exposure time down from several hours to around 10 or 20 minutes. Daguerre christened his new process the “Daguerreotype,” and in 1839, he agreed to make it public in exchange for a pension from the French government. After some tweaking to shorten the exposure process to less than a minute, his invention swept across the world and gave rise to a booming portrait industry, particularly in the United States.

4. Calotype: 1841

Around the same time that “Daguerreotypomania” was taking hold, the British inventor William Henry Fox Talbot unveiled his own photographic process called the “Calotype.” This method traded the Daguerreotype’s metal plates for sheets of high-quality photosensitive paper. When exposed to light, the paper produced a latent image that could be developed and preserved by rinsing it with hyposulphite. The results were slightly fuzzier than Daguerreotypes, but they offered one key advantage: ease of reproduction. Unlike Daguerreotypes, which made only one-off images, the Calotype allowed photographers to produce endless copies of a picture from a single negative. This negative-to-positive process would later become one of the basic principles of photography.

5. The Wet-Collodion Process: 1851

Daguerreotypes and Calotypes were both rendered obsolete in 1851, after a sculptor named Frederick Scott Archer pioneered a new photographic method that combined crisp image quality with negatives that could be easily copied. Archer’s secret was a chemical called collodion, a medical dressing that also proved highly effective as a means of coating glass plates with light-sensitive solutions. While these “wet plates” reduced exposure times to only a few seconds, using them was often quite a chore. The plates had to be exposed and processed before the collodion mixture dried and hardened, so photographers were forced to travel with portable darkroom tents or wagons if they wanted to take pictures in the field. Despite this drawback, the wet-collodion process’s unparalleled quality and low cost made it an instant success. One of its most famous practitioners was Mathew Brady, who used wet plates to produce thousands of stunning battlefield photos during the Civil War.

6. Dry Plates: 1871-1878

For most of the 1800s, the panoply of noxious solutions and mixtures involved in using a camera made photography difficult for anyone without a working knowledge of chemistry. That finally changed in the 1870s, when Richard Leach Maddox and others perfected a new type of photographic plate that preserved silver salts in gelatin. Since they retained their light sensitivity for long periods of time, these “dry” plates could be prepackaged and mass-produced, freeing photographers from the tedious task of prepping and developing their own wet plates on the fly. Dry plates also allowed much quicker exposures, letting cameras capture moving objects more clearly. In the 1880s, the photographer Eadweard Muybridge used dry plate cameras to conduct a series of famous studies of humans and animals in motion. His experiments have since been cited as a crucial step in the development of cinema.

7. Flexible Roll Film: 1884-1889

Photography didn’t truly become accessible to amateurs until the mid-1880s, when the inventor George Eastman began producing film on rolls. Roll film was more lightweight and resilient than clunky glass plates, and it allowed photographers to take multiple pictures in quick succession. In 1888, Eastman made flexible film the primary selling point of his first Kodak camera, a small, 100-exposure model that customers could use and then send back to the manufacturer to have their photos developed. Eastman’s camera was remarkably easy to use—he marketed it to Victorian shutterbugs under the slogan “You press the button, we do the rest”—but its coated paper film produced fairly low-quality photos. Film improved by leaps and bounds with the introduction of celluloid a year later, and it remained the standard medium of photography for nearly a century, until the advent of digital cameras.

8. Autochrome: 1907

The yearning for color photography was practically as old as the medium itself, but a viable method didn’t arrive until 1907. That was the year the French brothers Louis and Auguste Lumière—perhaps better known as early pioneers of cinema—began marketing an additive color process they dubbed “Autochrome.” The Lumières found the key to their invention in a most unlikely place: the potato. By adding tiny grains of dyed potato starch to a panchromatic emulsion, they were able to produce vivid, painterly images that put all past attempts at color to shame. Autochrome would reign as the world’s most popular color film technique until 1935, when a more sophisticated color process arrived in the form of the Eastman Kodak Company’s legendary Kodachrome film.