How does an A.I. learn?
-Delta rule and Back-propagation
If you join the army,
whether you like it or not,
you will get a chance to shoot.
Even an ace marksman always has to go through
a process called "zeroing in" before shooting.
This is how zeroing in works.
First, you put three shots on the target.
Then, you adjust your aim so the point of impact matches the zero point, and shoot again.
The delta rule in neural networks is like this zeroing-in process.
The delta rule comes from the ADALINE model.
The ADALINE model improved on the perceptron algorithm
by adding the delta rule.
However, the delta rule could not be applied to the multi-layer perceptron.
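The zeroing-in idea can be sketched in code: a single linear unit nudges its weights in proportion to the error, just like adjusting the aim after seeing where the shots landed. This is a minimal illustration in Python; the AND-gate task, learning rate, and epoch count are my own choices, not from the source.

```python
# Delta rule (ADALINE-style): update each weight in proportion to the
# error (target - output), where the output is a plain linear sum.

def train_delta_rule(inputs, targets, lr=0.1, epochs=100):
    n = len(inputs[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
            err = t - y                                   # the "delta"
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    # After training, classify by the sign of the linear output.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn the AND function with bipolar inputs and targets.
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
T = [-1, -1, -1, 1]
w, b = train_delta_rule(X, T)
print([predict(w, b, x) for x in X])  # → [-1, -1, -1, 1]
```

Note that the delta rule needs only the error at the single output unit, which is exactly why it cannot, by itself, tell the hidden layers of a multi-layer perceptron how to change.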
Earlier, Marvin Minsky had stated that learning in a multi-layer perceptron
was impossible, as the calculations and applications were too complicated.
Thus, like I told you before,
it seemed like the history of A.I. vanished into the cold winter air.
During the cold weather, some pioneers were preparing for spring.
In 1969, Yu-Chi Ho and Arthur E. Bryson, from the field of control theory,
came up with the back-propagation algorithm.
In 1974, Paul Werbos at Harvard University wrote a doctoral dissertation on a back-propagation algorithm
that enables learning in multi-layer perceptrons.
However, he could not publish the paper,
given the academic community's cold stance on neural networks.
It took him 8 years to publish the paper in a journal.
Then, 2 years later, in 1984,
Yann LeCun, who was working on his doctoral dissertation on neural networks,
found out about this paper.
And in 1986, neural networks regained their power,
when David Rumelhart at the University of California and Geoffrey Hinton
reformulated back-propagation and spread it to the world.
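To give a rough sense of what back-propagation does, here is a tiny two-layer sigmoid network trained on XOR, a problem a single-layer perceptron cannot solve. The network size, learning rate, and epoch count are illustrative assumptions, not details from the source.

```python
import math
import random

random.seed(0)  # fixed seed so the illustrative run is repeatable

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1 = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1.0, 1.0) for _ in range(4)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)       # delta at the output unit
        dh = [dy * w * hi * (1 - hi)     # deltas propagated backward
              for w, hi in zip(W2, h)]
        for j in range(4):               # gradient-descent updates
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
            W2[j] -= lr * dy * h[j]
        b2 -= lr * dy
err_after = total_error()
```

The key step is computing the hidden-layer deltas from the output delta: that backward flow of error is what the plain delta rule lacked, and it is what lets the hidden weights learn. After enough epochs the rounded outputs typically match XOR, though convergence depends on the random initialization.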
In 1989, Yann LeCun, during his postdoctoral work under Geoffrey Hinton,
made a turning point in the development of Deep Learning,
by applying the back-propagation algorithm to a CNN,
making it recognize handwritten ZIP code digits in the USA.