The History of Artificial Intelligence Begins Long Before 1955, With Ideas and Inventions That Anticipated the Creation of Smart Machines
Artificial Intelligence (AI) is a modern concept, but its origins date back long before 1955. In fact, even before the term was coined, there were glimpses of a future where machines could perform complex tasks that required human intelligence.
These early steps towards AI were presented in folklore, science fiction, and early technological innovations of the 19th and 20th centuries. Discoveries and ideas from inventors and thinkers of past generations were fundamental in shaping what we now understand as AI.
The First Glance at the Smart Machine
The concept of thinking machines did not emerge with the first digital computers or with the invention of AI as a scientific field.
In fact, it first appeared satirically in the pages of literature. In 1726, Jonathan Swift, in his work Gulliver’s Travels, introduced the notion of a machine called “The Engine.”
Although the description was an ironic critique of the scholars of his time, the idea of a contraption that could generate new ideas from mechanically rearranged words already foreshadowed what would later become algorithmic text generation.
Swift mocked the pretensions of academics, but his vision was surprisingly prescient, anticipating developments that, centuries later, would be made possible by advances in AI.
The Chess Automaton of Leonardo Torres y Quevedo
In the early 20th century, Spanish engineer Leonardo Torres y Quevedo developed an automaton called El Ajedrecista (The Chess Player).
In 1912, he created a machine capable of playing a simplified chess endgame: a king and a rook against a lone king.
The machine used electromagnets to move the pieces and could identify illegal moves, as well as being capable of checkmating when it was in a winning position.
Although it was a mechanical machine, El Ajedrecista demonstrated the possibility of replicating behaviors that require logical calculation, an essential principle for modern AI.
The Emergence of Computing
The 1940s marked a turning point in the path to the creation of modern AI. With the development of digital electronic computers, new possibilities for intelligent machines began to open up.
The Atanasoff-Berry Computer (ABC), created by John Vincent Atanasoff and Clifford Berry, was one of the first electronic digital computers and pioneered binary arithmetic and electronic circuitry, essential foundations for the development of AI programs.
In 1943, scientists Warren McCulloch and Walter Pitts proposed a mathematical model of the human brain, revealing that neurons and synapses could be modeled as computational networks.
This idea of neural networks would inspire decades of research and become an important field within AI, resurfacing strongly in the 21st century.
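To give a concrete sense of the McCulloch-Pitts model, the sketch below implements a single threshold neuron in Python: it fires only when the weighted sum of its binary inputs reaches a threshold. The weights and threshold shown are illustrative choices, not values from the original 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts threshold unit. The weights and
# threshold here are hypothetical, chosen only to illustrate the idea:
# the neuron "fires" (outputs 1) when the weighted sum of its binary
# inputs reaches the threshold.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Wired as an AND gate: both inputs must be active for the neuron to fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcculloch_pitts((a, b), weights=(1, 1), threshold=2))
```

Wired this way, the unit behaves as an AND gate; other weight and threshold choices yield OR or NOT, which is why networks of such units were seen as a model of logical computation.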
Alan Turing
No figure is more central to the history of AI than Alan Turing. Born in 1912, Turing was a British mathematician whose work laid the foundations for modern computer science.
In 1950, he published his paper “Computing Machinery and Intelligence,” where he raised a fundamental question: “Can machines think?”
Turing did not delve into philosophical discussions about what it means to “think,” but proposed a practical approach: the Turing Test.
In this test, a human judge interacts with two interlocutors — one human and one machine — through a text chat.
If the judge cannot distinguish which is which, the machine can be considered “intelligent.” The Turing Test became a milestone in AI evaluation and remains relevant to this day.
Turing’s work also had a profound impact on how we understand computing. He suggested that computers, if programmed correctly, could simulate any mental process, as long as there were proper instructions.
Unfortunately, Turing passed away in 1954, without seeing the term “AI” formally adopted, but his contribution to the field is undeniable.
John McCarthy
In 1955, mathematician and computer scientist John McCarthy, along with other researchers, proposed the Dartmouth Summer Research Project on Artificial Intelligence, a workshop held the following summer that is often considered the starting point of AI as a formal research field.
It was in this proposal that McCarthy introduced the term “artificial intelligence,” which would become the foundation for all future research in the area.
McCarthy also developed Lisp, a programming language fundamental to AI, which would become the primary tool for researchers for many decades.
Additionally, he believed that machines could reason logically and that formal logic would be the key to replicating human intelligence. His vision profoundly influenced the early development of Artificial Intelligence.
Marvin Minsky
Marvin Minsky, one of the organizers of the Dartmouth workshop, was a pioneer who made significant contributions to AI.
He was one of the founders of the MIT Artificial Intelligence Laboratory, which would become a center of excellence in AI development.
Minsky also developed The Society of Mind, a theory suggesting that intelligence emerges from a collection of specialized agents operating cooperatively.
He worked on early artificial neural networks as well, but in his critique of Frank Rosenblatt’s Perceptron he highlighted that such simple neural network models had fundamental limitations on more complex tasks.
Despite this, his contributions profoundly shaped the field by incorporating psychology and cognitive science into AI studies.
Herbert A. Simon and Allen Newell
While McCarthy and Minsky focused on formal logic and neural networks, Herbert A. Simon and Allen Newell approached AI from a cognitive perspective.
In 1956, Simon and Newell implemented the Logic Theorist, widely considered the first working artificial intelligence program, which proved mathematical theorems autonomously.
This was an important breakthrough, as it demonstrated that machines could tackle the kind of complex logical reasoning problems that humans face.
In 1957, they implemented the General Problem Solver (GPS), an attempt to simulate how humans solve problems in general, using heuristics to guide the search for solutions.
The idea that machines could reason and solve problems like humans became a central concept in AI for decades.
Arthur Samuel
In the 1950s, Arthur Samuel from IBM made one of the first major contributions to what we now call machine learning.
He created a checkers program that learned from its own games. With each game, the program improved its skills, adjusting its performance based on the feedback it received, a principle that would become central in machine learning.
His work was a milestone in the evolution of AI, as it showed that machines could learn and improve without direct human intervention.
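The sketch below illustrates, in broad strokes, that feedback principle: each position is scored by a weighted sum of features, and the weights are nudged after every game toward the observed result. The feature choices, learning rate, and game data are hypothetical, intended only to convey the idea of self-adjustment, not to reproduce Samuel's actual program.

```python
# A minimal sketch of learning an evaluation function from game outcomes.
# The features (piece advantage, mobility), learning rate, and game data
# are invented for illustration; Samuel's checkers program was far richer.

def evaluate(features, weights):
    """Score a position as a weighted sum of hand-crafted features."""
    return sum(w * f for w, f in zip(weights, features))

def update_weights(weights, game_positions, outcome, lr=0.01):
    """Nudge weights so positions from the game better predict the result."""
    for features in game_positions:
        error = outcome - evaluate(features, weights)
        weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights

weights = [0.0, 0.0]
game = [(1, 3), (2, 4), (3, 2)]                       # positions seen in one game
weights = update_weights(weights, game, outcome=1.0)  # 1.0 encodes a win
print(weights)
```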
Frank Rosenblatt
Frank Rosenblatt was another key figure in the history of AI, mainly for the development of the Perceptron, a type of artificial neural network.
Created in 1957, the Perceptron was capable of learning to classify objects based on labeled examples.
Although limited to linearly separable problems, the Perceptron was the precursor to modern neural networks, and its ideas were fundamental to the revival of AI decades later, when more advanced neural networks were developed.
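A minimal Python sketch of the perceptron learning rule appears below, trained on the logical AND function, a linearly separable toy problem. The learning rate, epoch count, and encoding are illustrative assumptions rather than details of Rosenblatt's hardware implementation.

```python
# A minimal sketch of the perceptron learning rule on a toy, linearly
# separable problem (logical AND). Learning rate and epochs are
# illustrative assumptions.

def train_perceptron(samples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict with a hard threshold, then correct toward the label.
            activation = weights[0] * x1 + weights[1] * x2 + bias
            prediction = 1 if activation >= 0 else 0
            error = target - prediction
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Labeled examples for AND: the output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print("weights:", w, "bias:", b)
```

Because AND is linearly separable, the weights settle on a separating line after a few passes; the same rule cannot learn XOR, which is exactly the kind of limitation Minsky pointed out.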
Joseph Weizenbaum and the ELIZA Chatbot
In the 1960s, Joseph Weizenbaum developed the ELIZA program, a chatbot that simulated human conversations based on a simple set of pattern-matching rules.
Although simple, the program demonstrated the possibility of convincing human-computer interaction.
ELIZA also raised ethical questions about interaction with machines, as many users attributed feelings and empathy to the program, something Weizenbaum had not anticipated.
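To illustrate how little machinery such a conversation requires, the sketch below uses a few hand-written pattern-matching rules in the spirit of ELIZA. The patterns and canned responses are invented for illustration and are not taken from Weizenbaum's original DOCTOR script.

```python
# A minimal ELIZA-style sketch: hand-written regex rules that reflect the
# user's words back as a question. Rules and responses are hypothetical.

import re

RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when no rule matches

print(respond("I need a vacation"))   # -> Why do you need a vacation?
print(respond("I am feeling tired"))  # -> How long have you been feeling tired?
```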
The Winter of Artificial Intelligence
After the initial optimism, AI research went through periods of disillusionment, known as “AI winters.”
In the 1970s and 1980s, unfulfilled promises and the limitations of AI systems of the time led to a significant slowdown.
However, the ideas of pioneers like Turing, McCarthy, and Minsky continued to influence subsequent generations of researchers.
Advancements in the 21st Century
Starting in the 2000s, with the increase in computer processing power and the emergence of new algorithms, AI began to develop at an impressive pace.
Deep learning, an advanced type of neural network, started to show its potential in areas such as computer vision, speech recognition, and natural language processing.
In 2012, AlexNet’s victory in the ImageNet challenge was an important milestone, symbolizing the resurgence of the field.
Today, AI is present in various areas of our daily lives, from personal assistants to self-driving cars.
Innovations are rapid, but we cannot forget that modern AI was built on the shoulders of giants who came before it.
Turing, McCarthy, Minsky, Rosenblatt, and their colleagues established the foundations that allowed us to reach where we are today.
The history of AI, filled with challenges and discoveries, continues to unfold, and its impact on the modern world is undeniable.
With information from Interesting Engineering.
