The Rise of Artificial Intelligence

by Angela Toews

Posted on Tuesday May 30, 2017 at 10:00 AM


Interest and investment in Artificial Intelligence and data science have reached a fever pitch. Advances in machine learning techniques such as deep learning have led to numerous successful applications of AI throughout the economy. Today it's clear that deep neural networks outperform many existing techniques, even surpassing humans on traditionally difficult problems such as speech recognition, natural language processing, customer relationship management and image recognition. It was not always like this, however; at one point scientists and researchers had all but abandoned the field.

Let's take a look at the history of AI and its turbulent rise to prominence.

Ancient History

Fig1. - The myth of Hephaestus.


The beginnings of AI can be traced all the way back to Greek mythology and the myth of Hephaestus, a blacksmith who created mechanical servants. In the 4th century B.C. Aristotle began laying the foundation for AI by inventing syllogistic logic, the first formal deductive reasoning system. In the 15th century, printing with movable type and functioning clocks were invented. In the 16th century, clockmakers extended their craft to creating mechanical animals and other novelties. Soon after, in the 1640s, Pascal created the first mechanical digital calculating machine.

The 18th century saw a profusion of mechanical toys. In the 19th century, Charles Babbage and Ada Byron (Lovelace) worked on programmable mechanical calculating machines. In 1943 Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics"; Wiener's popular book by that name was published in 1948. Many books followed, outlining a prescient vision of a future in which computers assist humans in many activities, along with tests of intelligent behavior and laws of robotics.


Modern History

Fig 2. - John McCarthy and the Summer Research Team of 1955.


In 1955 John McCarthy proposed the Dartmouth Summer Research Project on Artificial Intelligence, which was held in 1956. The group of professors and students set out to discover how machines could use language, abstractions and concepts to solve problems normally reserved for humans.

In 1959 Arthur Samuel, best known for his checkers-playing program, developed the first self-learning AI program. He believed that teaching computers to play games was important for developing tactics for general problem solving, and he chose checkers because it is a simple game that nevertheless requires in-depth strategy. It was an early demonstration of the concept of artificial intelligence. His system was the first to use minimax search, alpha-beta pruning, truncated search, evaluation functions and machine learning.
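The core of Samuel's approach - minimax search with alpha-beta pruning over an evaluation function - can be sketched in a few lines. This toy example uses a hypothetical nested-list game tree rather than actual checkers positions:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the final decision."""
    # Leaves are plain numbers: scores from an evaluation function.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # the minimizing opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # the maximizing player avoids this branch
                break
        return value

# A two-ply toy tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

Pruning pays off here because the second branch's reply of 2 already guarantees a worse outcome than the 3 secured in the first branch, so the 9 is never examined.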

In 1961 James Slagle of MIT wrote the first symbolic integration program, SAINT, which solved calculus problems at the college-freshman level. In the same year Unimate, the first industrial robot, went to work on a General Motors assembly line transporting auto bodies; it was later deployed at other companies as a welding robot. In 1963 Thomas Evans, also of MIT, developed a program called ANALOGY, demonstrating that computers can solve the same analogy problems given on IQ tests. In 1965 Joseph Weizenbaum of MIT built ELIZA, an interactive program that carried on a dialogue in English on any topic. It became wildly popular, with many users attributing genuine understanding to it.


Fig 3. - "Shakey" and his Creators.



In 1969 the robot SHAKEY demonstrated the combination of locomotion, perception and problem solving. In 1970 Jaime Carbonell Sr. developed SCHOLAR, an interactive program for computer-aided instruction that used semantic nets as its representation of knowledge. In 1971 Terry Winograd launched what we now think of as classical AI, developing an AI-based framework for understanding natural language. He introduced the idea of a world modelled as blocks, or abstract symbols, which could be reasoned about using logic. In 1972 the world's first full-scale intelligent humanoid robot, WABOT, was completed at Waseda University in Japan.

At the same time, in the 1970s, it was becoming clear that researchers had underestimated the difficulty of the problems AI had set out to solve. The agencies funding these projects grew skeptical and pressured researchers to deliver more applicable, efficient technologies. But the researchers faced a daunting set of obstacles: limited computing power, programs lacking commonsense knowledge and reasoning, problems solvable only in exponential time, and so on. Criticism of AI mounted, and the period became known as the AI winter.


Fig 4. New discoveries in weather predictions and GPS tracking.


In the 1980s a different form of AI program, the expert system, was adopted by corporations around the world and came to guide the mainstream of AI research. In Japan the government actively funded AI projects to gain a technological edge. At the same time, John Hopfield and David Rumelhart led a revival of connectionism, or neural networks. Although neural networks had been introduced as early as 1943, and backpropagation (an efficient training algorithm) was invented in 1969, the hardware of the time was too slow. Now neural networks could be trained as associative memories, and a systematic method existed for training multilayer networks.

But, as history always repeats itself, the market for AI collapsed in the late 1980s and the industry's goals went unmet. Once again AI researchers had predicted the imminent arrival of strong AI, and once again they had been mistaken. Chastened, researchers stopped making predictions and promises they might not be able to keep.

Then came the '90s, which took us on a discovery rollercoaster from probabilistic graphical models (Bayesian networks) to machine learning (decision trees, support vector machines, boosting, Gaussian processes). Finally some of the goals were being reached. AI became successful, if somewhat behind the scenes. Researchers regained confidence as their problems were solved before their eyes, thanks to more computing power, a focus on specific isolated problems and a commitment to scientific accountability.


Fig 5. - Improvements in voice recognition and interactive programs.


In recent years, progress has been made on probabilistic relational models, deep networks, active learning and structured prediction. Neural networks are used to model problems by mapping input variables to output variables - in other words, as prediction models.
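To make the "prediction model" idea concrete, here is a minimal sketch of a single-neuron network trained by gradient descent; the data (y = 2x) and the learning rate are made up for illustration:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # target mapping: y = 2x

w = 0.0        # the network's single weight
lr = 0.01      # learning rate

for _ in range(500):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))   # the learned weight approaches 2.0
```

Real deep networks stack millions of such weights with nonlinear activations between them, but the loop - predict, measure error, nudge the weights - is the same.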

We have gathered enormous amounts of data and developed sophisticated reasoning methods, complex machine learning algorithms and massive computing power. From these we have built automated driving, intelligent houses, behavioural ad targeting, fraud and security-breach detection, stock market prediction, robots that learn by imitation, heart monitoring, security and military guidance, GPS tracking, legal solutions and more.


Fig 6. - 2015 Japanese Humanoid Robot "PEPPER".


What else will we discover?


AI is going to be a part of everything. This explains why everyone is trying to find a way to integrate AI and machine learning into their business models: chatbots, live event monetization, virtual sales reps, cognitive commerce and content management, real-time pricing analysis, sentiment analysis, recommendation engines, and so on.

In the future AI will take on even more sophisticated roles, adding value through intelligent health-monitoring suggestions, advice on optimal outcomes for hard-to-handle situations, orchestration and multitasking of jobs, and simplification of the day-to-day.

It’s easy to fall behind, as the tech industry seems to be growing at an uncontrollable pace. A Business Insider article shows the stunning potential for growth in this space, stating: “Research firm Markets and Markets estimates that the AI market will grow to $70 billion by 2020.” As of 2017 it is estimated at $15 billion.

AI will likely be the new face of your business - it can set you apart from your competition, or possibly jeopardize business as you know it.

The information for this article was gathered from personal experience as well as online research. Check out these reference materials: The History of Artificial Intelligence, Brief History of AI, and Artificial Intelligence and Bias.


Not sure how Artificial Intelligence can help your organisation?

Contact us today.