MNIST Data Set, the Hello World of Deep Learning
If machine learning's Hello World is the iris data set, deep learning's is MNIST.
MNIST is a large database of handwritten digits.
In 1986, NIST was looking for a way to read and quickly classify postal codes.
Sorting mail by hand was not only error-prone but also expensive.
In 1989, Yann LeCun presented the CNN (Convolutional Neural Network) algorithm to the world,
believing that deep learning could solve this problem.
His system separated the digits one by one from the envelope images
and put each crooked digit through a linear transformation into 40x60-pixel digit data.
This was more challenging than it sounds.
After this complex preprocessing, the system could produce the CNN's input data
and successfully classify postal codes.
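To make the "linear transformation" step concrete, here is a minimal sketch of moment-based deskewing, a common way to straighten slanted digit images. It is an illustration in NumPy/SciPy under my own assumptions, not LeCun's original 1989 pipeline.

```python
import numpy as np
from scipy import ndimage

def deskew(img):
    """Straighten a slanted digit image with a shear transform
    (moment-based deskewing; an illustrative sketch only)."""
    if img.sum() == 0:                           # blank image: nothing to do
        return img
    c0, c1 = ndimage.center_of_mass(img)         # (row, col) center of mass
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    mu02 = ((y - c0) ** 2 * img).sum()           # vertical variance
    mu11 = ((y - c0) * (x - c1) * img).sum()     # row/column covariance
    if mu02 == 0:                                # degenerate image
        return img
    alpha = mu11 / mu02                          # estimated slant
    # affine_transform maps each output pixel back to a sheared input coordinate
    matrix = np.array([[1.0, 0.0], [alpha, 1.0]])
    offset = np.array([0.0, -alpha * c0])
    return ndimage.affine_transform(img, matrix, offset=offset)
```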
At that time, there was great interest in character recognition,
and many companies were promoting the excellence of their visual recognition features.
In 1991, Peter W. Frey and David J. Slate distorted 20 alphabet fonts designed
by Allen V. Hershey into 20,000 letter images.
They then extracted 16 features from each image to recognize the letters.
NIST needed sample data with which to compare and evaluate such systems.
To that end, it collected handwriting samples and assembled them into a data set, as described below.
Yann LeCun thought that working from this data set
would lead to a more refined data set for deep learning.
This is how MNIST was built on top of the NIST data.
MNIST is based on handwriting collected from high school students (NIST SD-1, Special Database 1)
and from Census Bureau employees (NIST SD-3).
The MNIST data set was created from this data through normalization, standardization, and cleanup.
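As an illustration of that normalization step: the published MNIST images are commonly described as size-normalized digits centered by their center of mass in a 28x28 field. The sketch below shows one way to do such centering with NumPy and SciPy; the helper name and implementation details are my own assumptions, not NIST's original processing code.

```python
import numpy as np
from scipy import ndimage

def center_in_field(digit, field=28):
    """Pad a size-normalized digit into a field x field image, then shift
    it so its center of mass sits at the geometric center (a sketch of
    MNIST-style centering, not the original NIST processing)."""
    canvas = np.zeros((field, field), dtype=float)
    r0 = (field - digit.shape[0]) // 2
    c0 = (field - digit.shape[1]) // 2
    canvas[r0:r0 + digit.shape[0], c0:c0 + digit.shape[1]] = digit
    # Shift the image so its mass center coincides with the field center.
    com = ndimage.center_of_mass(canvas)
    shift = ((field - 1) / 2 - com[0], (field - 1) / 2 - com[1])
    return ndimage.shift(canvas, shift)
```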
Today, deep learning researchers typically start with MNIST.
This data set has contributed greatly to deep learning, a field where data is key.
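Getting started with it is a one-liner in most frameworks. Below is a minimal sketch assuming TensorFlow/Keras is installed; other libraries (torchvision, for example) ship similar MNIST loaders.

```python
# Load MNIST: 60,000 training and 10,000 test images of 28x28 grayscale digits.
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)
print(y_train[:10])   # integer labels from 0 to 9
```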