Support Vector Machine: classifying data with hyperplanes

In 1995, Corinna Cortes and Vladimir Vapnik

proposed a modified maximum-margin classifier

that tolerates misclassified data.

When no hyperplane can separate the data

into the two classes perfectly,

this classifier finds the hyperplane that classifies

the given data as accurately as possible

while maximizing the distance to the nearest

correctly classified points.
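In standard notation (not shown in the clip itself), the soft-margin objective Cortes and Vapnik proposed is usually written as:

```latex
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\lVert w\rVert^2 + C \sum_{i=1}^{n} \xi_i
\qquad \text{subject to} \qquad
y_i\,(w \cdot x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0,
```

where the slack variables ξᵢ measure how far each point violates the margin and C controls the trade-off between a wide margin and few misclassifications.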

The original idea was proposed by Vladimir Vapnik and Alexey Chervonenkis in 1963.

Nearly 30 years later, in 1992, Bernhard Boser, Isabelle Guyon,

and Vladimir Vapnik proposed a nonlinear classifier

by applying the kernel trick to maximum-margin hyperplanes.

After further development, Cortes and Vapnik finally published

the paper with the soft-margin version used today in 1995.

Let’s briefly go over the Support Vector Machine (SVM) algorithm.

First, the linear SVM.

A linear SVM looks for a straight line that separates the different groups.

With the soft margin, it looks for the line that trades off

the rate of misclassification against maximizing the margin.
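As a minimal sketch of that trade-off (assuming NumPy; the data, learning rate, and C value below are invented for the demo), a soft-margin linear SVM can be trained by sub-gradient descent on the hinge loss:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=1000):
    """Soft-margin linear SVM via per-sample sub-gradient steps
    on (1/2)||w||^2 + C * sum(hinge losses)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # Misclassified or inside the margin: hinge loss is active.
                w += lr * (C * y[i] * X[i] - w / n)
                b += lr * C * y[i]
            else:
                # Correct side with enough margin: only shrink w
                # (shrinking w is what widens the margin).
                w += lr * (-w / n)
    return w, b

# Two linearly separable groups (made-up points).
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],
              [6.0, 5.0], [7.0, 8.0], [8.0, 8.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Raising C punishes margin violations more heavily; lowering it lets more points sit on the wrong side in exchange for a wider margin.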

Let’s look at the nonlinear SVM now.

Suppose the data points all lie on a single line:

no threshold on that line separates the classes.

But if we apply the function f(x)=x**2

and lift the data into two dimensions,

we can then find a line that separates them.
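The lifting step can be sketched in a few lines (assuming NumPy; the sample values and the threshold are invented for the illustration):

```python
import numpy as np

# One-dimensional data that no single threshold can split:
# the inner points belong to one class, the outer points to the other.
x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([1, 1, -1, -1, -1, 1, 1])  # +1 = outer, -1 = inner

# Apply f(x) = x**2 to lift each point into a second dimension.
X = np.column_stack([x, x ** 2])

# In the lifted space a horizontal line separates the classes:
# here the hand-picked threshold x2 = 2.5 on the new coordinate suffices.
pred = np.where(X[:, 1] > 2.5, 1, -1)
```

Every point is now classified correctly by a straight line in the lifted space, which is exactly the effect the kernel trick exploits without building the lifted coordinates explicitly.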

A function like f(x)=x**2,

which maps the data into another space, is what SVM’s kernel is built on

(strictly speaking, this is the feature map; the kernel computes

inner products in the mapped space).

There are various kernels, and in the case below,

we can find a separating hyperplane by choosing an appropriate kernel.
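Here is a sketch of the kernel idea on XOR-style data, which no straight line can separate. For brevity it uses a kernel perceptron rather than a full SVM solver: like the SVM it touches the data only through the kernel function, but the update rule is the simpler perceptron one, not margin maximization. The data and gamma are invented for the demo.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel: similarity decays with squared distance.
    return np.exp(-gamma * np.sum((a - b) ** 2))

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1, 1, -1, -1])  # XOR labels: not linearly separable in 2-D

n = len(X)
K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

alpha = np.zeros(n)            # per-sample weights (dual representation)
for _ in range(100):           # passes over the data
    for i in range(n):
        score = (alpha * y) @ K[:, i]
        if np.sign(score) != y[i]:
            alpha[i] += 1.0    # mistake: give this sample more weight

pred = np.array([np.sign((alpha * y) @ K[:, i]) for i in range(n)])
```

The classifier never needs coordinates in the (here infinite-dimensional) RBF feature space; it only ever evaluates the kernel between pairs of points.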

Vladimir Vapnik was born in Uzbekistan.

He received his master’s degree in mathematics from Uzbek State University

and his Ph.D. in statistics at the Institute of Control Sciences.

He worked there from 1961 to 1990

as head of the Computer Science Research Department.

At the end of the Soviet era,

he moved to Bell Labs in Holmdel, New Jersey, in 1990.

The SVM offered solutions to many problems

the machine learning community was interested in,

including handwriting recognition,

and became a star of the field.

Vladimir Vapnik and Larry Jackel bet a meal on it.

Yann LeCun, who came up with the CNN,

stood as witness for the two big figures.

Five years later, the limitations and abilities of bigger neural networks

had not been clearly characterized, as Larry Jackel had predicted they would be.

Nor had the SVM clearly surpassed the neural network, as Vladimir Vapnik had argued it would.

Lucky for Yann LeCun,

he enjoyed the luxury of a nice meal from both of them.

The co-author of the SVM paper, Corinna Cortes, is a mother of two children

and an accomplished marathon runner at the same time.

Do you need database performance monitoring? Contact us and we will send you a free quote.

[email protected]


All Rights Reserved. © 2019 EZIS for Cloud