The Story of AI

Our imagination has no boundaries. As we learn more about the world, our imaginations grow wilder. Could we eventually create machines that can think and behave like us? The question has long fascinated not only scientists but almost everyone.

The thought of human-like machines has been exhilarating to filmmakers as well. There have been many dazzling films about what would happen if machines behaved like us; in particular, movies featuring battles between machines and humans have broken box office records. In 1970, Colossus: The Forbin Project portrayed a superintelligence taking logical steps to rule humans for their own good. The Terminator pictured a war between humans and the superintelligence ‘Skynet’, which sought to destroy the human race. The Matrix (1999), I, Robot (2004), Her (2013) and Avengers: Age of Ultron (2015) are all movies about intelligent machines that have captured our imaginations. Just as the public has been fascinated by what would happen if other intelligent life existed, scientists have been exploring ways to bring the fantasy to reality.

An AI robot from the movie The Matrix. Credit: Warner Bros

The early approach

In 1950, the British mathematician Alan Turing tried to address the question “Can machines think?” in his seminal paper. He first proposed a game that initially involved no machines at all: a man and a woman sat in a room separated from a player, who could not see them and could not ask about their gender directly. The player had to guess which was which from their answers, while one of the pair tried to trick the player with misleading answers and the other assisted. To test whether a machine could act like a human, the trickster was then replaced by a machine. If the player could not distinguish the machine from the person it replaced, the machine won. A machine that can act like a human being in this game is said to have passed the Turing Test. The Turing Test not only set a monumental finish line for computer scientists but also sparked vast interest in exploring how machines could replicate the reasoning process and learning ability of humans. For instance, two computer scientists, Allen Newell and Herbert Simon, developed Logic Theorist, a computer program that produced proofs for more than half of the theorems in Whitehead and Russell’s Principia Mathematica. It is therefore regarded as the first program to show that machines could mimic the problem-solving skills of a human being. Another American pioneer, John McCarthy, invented Lisp, a programming language designed specifically for AI.

Lisp was later used to write a program that could solve calculus problems at a college level. These early advancements fulfilled some of the public’s fantasies and stirred up immense interest. The promising performance of AI on simple tasks boosted confidence in the scientific community, and many researchers were so optimistic that they made bold claims. Simon publicly stated that the ability of machines would increase so rapidly that they could handle problems like the human mind in the visible future. Marvin Minsky likewise claimed that the problem of creating “artificial intelligence” would be substantially solved within a generation.

Yet when these early AI systems were generalized to tackle complex problems, they failed miserably. It was perhaps the high hopes for AI that propelled its development; however, it was also the failure to meet those expectations that stalled its advancement, as the following two stories demonstrate.

The American story

In 1954, a joint Georgetown University and IBM experiment demonstrated a system that could translate Russian into English using six grammar rules and a vocabulary of 250 words specialized in organic chemistry. Yet its success was overstated by the media. The headline in The New York Times, “Russian is turned into English by a fast electronic translator”, captured the public’s attention. The US government had high expectations of machine translation as a tool in its political tensions with the Soviet Union and invested a great deal in the field. The leader of the experiment, Leon Dostert, was so optimistic about the project that he claimed interlingual conversion could be accomplished in three to five years.

Yet the road to machine translation systems suitable for general use was full of roadblocks. Inputs from the general public were hard to predict, and the systems produced misleading and senseless outputs. Correcting these spurious outputs often required so much human intervention that people would rather translate manually. This failure rang alarm bells in the US government, which convened the Automatic Language Processing Advisory Committee (ALPAC) to assess the progress of the research. The ALPAC report cast doubt on machine translation’s real economic return, and funding for the research was eventually cut.

The British story

IBM RAMAC 305. Credit: IBM Archives

In 1973, the British Science Research Council asked James Lighthill of Cambridge University to evaluate the progress of AI research. He thought that research on advances in automation and on the central nervous system was useful, but that research on their connection was worthwhile only if it could facilitate either field. In his opinion, AI had contributed little to both fields and was therefore not worth pursuing. For specific problems, he stated that conventional techniques could be more successful than AI methods. He also pointed to a “combinatorial explosion” in the AI programs of the time: computations involving more than a few variables take enormous time, and he thought it impractical for a machine to tackle complex problems involving many variables. He therefore concluded that existing AI techniques could work well only on small tasks, but were not scalable to real problems.
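The combinatorial explosion Lighthill described can be made concrete with a toy calculation (an illustrative sketch, not anything from his report): a brute-force search over n yes/no variables must examine 2^n candidate assignments, so each added variable doubles the work.

```python
# Illustrating the "combinatorial explosion": exhaustive search over
# n yes/no variables must check 2**n candidate assignments.

def search_space(n_variables):
    """Number of assignments a brute-force search must examine."""
    return 2 ** n_variables

for n in (10, 20, 40, 80):
    print(n, "variables ->", search_space(n), "candidates")
# Ten variables give about a thousand candidates; eighty variables
# already exceed 10**24, far beyond any machine of the era.
```

This doubling with every added variable is why methods that looked impressive on toy problems could not simply be scaled up to real-world ones.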

Everyone wanted to see the impact of AI research, or at least a foreseeable prospect of it. Yet the failure to deliver on extravagant promises brought great disappointment to governments, companies and even the scientific community. The frustration of the unrealized dream caused retrenchment and plunged AI into a dark, cold winter.

Bringing AI back to life

Instead of pursuing AI research directly, scientists shifted their focus to other disciplines. Yet AI benefited indirectly from achievements in other subdisciplines of computational science. This was the era in which fields such as neural networks and machine learning brought unprecedented surprises.

In particular, the physicist John Hopfield analyzed the storage and optimization properties of neural networks using statistical mechanical concepts from physics, which breathed life into neural network research again. The artificial neural network is inspired by the human brain: it consists of interconnected computing units, like the neurons in the brain that are responsible for signal transmission. The strength of a transmitted signal is calibrated by the weights associated with the connections between units, and these weights are adjusted based on past inputs. By adjusting the weights on its connections, the network can, in effect, teach itself.
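The idea of memories stored in connection weights can be sketched in a few lines of code. The following is a minimal, illustrative Hopfield-style network (the function names and pattern are my own, not from Hopfield's work): a pattern of +1/-1 values is stored by strengthening the weights between units that are active together, and a corrupted version is recovered by letting each unit repeatedly respond to the weighted signals from the others.

```python
import numpy as np

def train(patterns):
    """Store +1/-1 patterns in a weight matrix via a Hebbian-style rule."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)      # strengthen links between co-active units
    np.fill_diagonal(W, 0)       # a unit does not connect to itself
    return W / len(patterns)

def recall(W, state, steps=10):
    """Update every unit from its weighted inputs until the state settles."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)   # threshold each unit's input
    return s

pattern = [1, -1, 1, -1, 1, -1]   # the "memory" to store
W = train([pattern])

noisy = list(pattern)
noisy[0] = -noisy[0]              # corrupt one unit of the memory
print(recall(W, noisy))           # the network settles back to the stored pattern
```

Notice that nothing is programmed about the pattern itself: the knowledge lives entirely in the weight matrix, which is exactly the sense in which such a network "teaches itself" from past inputs.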

Computational scientists in the field of machine learning have also brought immense excitement to the public. Machines can now identify objects in images and videos, a task once considered one of the greatest obstacles in AI. These advancements in different subdisciplines have indirectly brought AI back to life.
