When you hear the phrase "artificial intelligence", you might picture sentient robot overlords terrorizing the human race. That image is courtesy of Hollywood, of course. We probably don't need to tell you how extreme and inaccurate it is.
Artificially intelligent entities are bots and computer programs created to perform complex tasks typically entrusted to humans. Perhaps it's a little unfair to call it "artificial" intelligence — but that's a topic for another post. Instead, let's dive a little deeper into the history of artificial intelligence.
The invention of artificial intelligence
Who invented artificial intelligence? That depends on who you ask.
AI seems like a modern concept, a 21st-century invention. But you might be surprised to learn that ancient Greek myths told grand tales of robots and artificially intelligent beings.
Hephaestus, the Greek god of invention, crafted a bronze robot named Talos for Zeus. Talos' job was to protect Crete from intruders. How do we know he was a robot? Because, according to one myth dating back to the 3rd century B.C., he had bolts in his ankles. And, when he was defeated by a sorceress, he didn't bleed like humans.
Pandora, from the famous Pandora's box myth, is also thought to have been an artificially intelligent being. It's a fitting comparison: the Pandora story foreshadows modern Hollywood's "evil" robots who hate humankind and make it their mission to destroy civilization.
But that's enough of the history lesson. If you'd like to learn more about AI's roots in ancient mythology, check out the work of Stanford researcher Adrienne Mayor. Now, let's move on to the origins of AI in the 20th century.
Origins of AI in the 20th century
The origins of artificial intelligence as we know it today date back to the 1930s. Neuroscientists researching the electrical properties of the brain pondered the possibilities of creating an artificial electronic brain.
A neuroscientist named William Grey Walter designed one of the world's first robots in the 1940s. These little robots — aptly named turtles after their slow movement and appearance — illustrated how the brain's electrical connections powered behavior. The turtles were capable of following light and could reach their charging port without any outside interference.
Who invented artificial intelligence?
Alan Turing is considered one of the founding fathers of AI. But computer scientist John McCarthy actually coined the term in 1956, two years after Turing's tragic death.
In his 1950 paper "Computing Machinery and Intelligence", Alan Turing boldly explored the possibility of creating machines capable of human-like thought. But Turing and other experts of that time faced a critical roadblock: how do you define human thought?
(We're not going to touch that question with a 10-foot pole.)
In that same paper, Turing proposed what became known as the Turing Test. The test gauges whether a machine's responses to questions can pass for a human's.
The Turing Test involves three "individuals": a judge and two interviewees. One interviewee is a human and the other is a computer. Both interviewees are unseen. If the judge isn't able to determine which interviewee is the computer and which is the human, the computer passes the Turing Test. This means it's demonstrated a capacity for human-like thought.
When was artificial intelligence officially established?
Artificial intelligence became a formal field of study in 1956, at the Dartmouth workshop McCarthy organized. A few years prior, in 1951, Christopher Strachey and Dietrich Prinz wrote the first two primitive AI programs, which ran on the Ferranti Mark 1 at the University of Manchester. What did those programs do? Strachey's played checkers and Prinz's played chess.
IBM's Arthur Samuel wasn't far behind with his own AI gaming program in 1952. He continued to refine it until it learned how to play on its own in 1955.
Shortly after McCarthy coined the term "artificial intelligence" in 1956, computer scientists at the Carnegie Institute of Technology demonstrated their program, the Logic Theorist. The Logic Theorist is widely considered the true "first" AI program, but Arthur Samuel's program is a strong contender.
Has artificial intelligence always been a popular field of study?
With today's emphasis on technology, it's easy to assume AI was an instant hit. And it was, at first. But the field soon entered its first "AI winter", a period of slashed funding and waning interest during the 1970s, with a second slump to follow in the late 80s.
AI in the 1970s
Many credit the start of this "AI winter" to the 1969 publication of Perceptrons, by Marvin Minsky and Seymour Papert. In a nutshell, the book pointed out the limitations of 1960s-era artificial neural networks called perceptrons, most famously that a single-layer perceptron cannot learn a function as simple as XOR. The controversial book also included bleak predictions about the perceptron's potential.
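To see the limitation for yourself, here's a minimal sketch of the classic perceptron learning rule (not Minsky and Papert's own code, just an illustration). Trained on AND, whose outputs a single line can separate, it converges; trained on XOR, it never gets all four cases right:

```python
# A minimal single-layer perceptron, sketched to illustrate the limitation
# Minsky and Papert analyzed: it learns linearly separable functions like
# AND, but can never learn XOR, whose classes no single line can separate.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on two-input boolean samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print("AND accuracy:", accuracy(AND, w, b))  # converges to 1.0

w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(XOR, w, b))  # stuck at or below 0.75
```

Stacking perceptrons into multiple layers solves XOR, but practical training methods for multi-layer networks didn't become widespread until much later.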
Following the book's release, funding cuts effectively ended research on perceptrons, and AI fell out of favor. Four years later, in 1973, the Lighthill Report further questioned AI's usefulness as a field of science and criticized its failure to deliver on its early promises.
AI in the 1980s
The work of prolific computer scientists briefly revived AI in the 1980s. The sector enjoyed a boom thanks to an "expert system" developed in the early 1980s called Xcon.
Expert systems were designed to capture human expertise and reduce human error. Xcon automatically configured computer orders for customers of the Digital Equipment Corporation, a task that produced costly mistakes when done by hand. The system ran on two subprograms: a knowledge base of if-then rules and an inference engine. Using the rules stored in the knowledge base, the inference engine would reason its way to a decision. Xcon itself was written in OPS5, a rule-based language, and many expert systems of the era were built on Lisp.
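The knowledge-base-plus-inference-engine split can be sketched in a few lines. The rules below are hypothetical illustrations invented for this example, not Xcon's real configuration logic; the engine simply "forward chains", firing any rule whose premises are known until no new facts appear:

```python
# A toy forward-chaining inference engine in the spirit of 1980s expert
# systems: a knowledge base of facts plus if-then rules, and an engine
# that fires rules until no new facts can be derived.

# Each rule pairs a set of required facts with the fact it concludes.
# These rules are made up for illustration only.
RULES = [
    ({"order_includes_disk_drive"}, "needs_disk_controller"),
    ({"needs_disk_controller", "cabinet_has_free_slot"}, "add_controller"),
    ({"order_includes_disk_drive", "add_controller"}, "order_complete"),
]

def infer(facts, rules):
    """Forward chaining: repeatedly apply rules whose premises all hold."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

result = infer({"order_includes_disk_drive", "cabinet_has_free_slot"}, RULES)
print(sorted(result))
```

The appeal of this design was that domain experts could extend the knowledge base with new rules without touching the engine, which is what let systems like Xcon grow to thousands of rules.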
An overnight success, Xcon inspired businesses all over the world to use and develop their own expert systems. AI reclaimed its fame and blossomed into a billion-dollar industry.
Around the same time, Japan began working on the Fifth Generation Project. Its ambitious goals included translation, image interpretation, and human-like reasoning. The Alvey Programme was the UK's response to Japan's Fifth Generation Project.
But in 1987, the bottom fell out: the market for the specialized Lisp machines that ran many expert systems collapsed as ordinary desktop computers became cheaper and more powerful. Production of expensive expert systems slowed, hundreds of companies went under, and AI entered a second winter that lasted through the mid-90s.
Artificial intelligence in the 21st century
Okay, so how did we go from the struggling field of the 1980s to a multibillion-dollar industry that has produced chatbots, automated voice assistants, and robotic surgery systems? Despite the second AI winter, computer scientists continued to make groundbreaking discoveries in machine learning, reasoning, data mining, and natural language processing.
The early '90s saw TD-Gammon, a backgammon program skilled enough to compete with the world's best players. In 1992, NASA launched a robot to explore the frigid Antarctic sea. In 1993, MIT began a project to build a humanoid robot that could learn like a child. And in 1997, IBM's famous Deep Blue made headlines when it defeated world chess champion Garry Kasparov.
This is just a small sample of the breakthroughs of the 1990s. And the rest, as they say, is history.
A brief history of artificial intelligence: wrapping up
Is there such a thing as a "brief" history of artificial intelligence? It's safe to say the answer is no. We've truly only scratched the surface here. And despite the recent breakthroughs in artificial intelligence, its origins date back thousands of years. Our ancient, overactive imaginations may have even predicted the rise of AI.
Now that we've got a good grasp on the history of AI, it's time to think about where we are now and where we're headed next. Stay tuned for our next post, where we'll cover some of the hottest, up-to-the-minute advancements in AI technology.