A Simple Approach to Understanding How Kernels Work in SVMs

Satyam
2 min read · Jun 14, 2022

Support vector machines (SVMs) are among the most robust and accurate ML algorithms.

In SVMs, kernels play a vital role: they make it possible to solve a non-linear problem with a linear classifier. An SVM uses the kernel trick to transform the data points and create an optimal decision boundary.

In this blog, with the help of a simple example, we will see how kernels transform data that is not linearly separable into data that is.

Let’s take some data to understand this. We have two-dimensional data, as shown in Fig. 1, where the points belong to two different classes. This dataset is not linearly separable: we cannot classify it just by drawing a straight line.

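The code embedded in the original post isn’t visible here, but a minimal sketch of how such a dataset could be generated and plotted might look like the following. It assumes two concentric rings built with scikit-learn’s make_circles; the exact data in the post may differ.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles

# Two concentric rings: class 0 (outer) and class 1 (inner).
# No straight line can separate these two classes in 2D.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=42)

plt.scatter(X[y == 0, 0], X[y == 0, 1], c="tab:blue", label="class 0")
plt.scatter(X[y == 1, 0], X[y == 1, 1], c="tab:orange", label="class 1")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```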
Fig. 1: 2D Data

It’s clear that we cannot classify the above dataset with a linear decision boundary, but the data can be made linearly separable by mapping it into a higher dimension. This is the idea behind the kernel trick.

Let’s create one more dimension and name it z.

We will calculate the value of z for each point using the following equation:

z = x² + y² (eqn. 1)

By adding this dimension, we get a three-dimensional space, as shown in Fig. 2. Since z is simply the squared distance of a point from the origin, points of the inner class get small z values while points of the outer class get large ones.

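The embedded transformation and plotting snippets aren’t visible here either; below is a minimal sketch of both steps, assuming the make_circles data from the first snippet. It computes z with eqn. (1) and plots the lifted points with Plotly’s Scatter3d.

```python
import plotly.graph_objects as go

# Lift each 2D point (x, y) to 3D using eqn. (1): z = x**2 + y**2
x, y_coord = X[:, 0], X[:, 1]
z = x**2 + y_coord**2

fig = go.Figure(
    data=go.Scatter3d(
        x=x, y=y_coord, z=z,
        mode="markers",
        marker=dict(size=3, color=y),  # color points by class label
    )
)
fig.update_layout(scene=dict(xaxis_title="x", yaxis_title="y", zaxis_title="z"))
fig.show()
```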
Fig. 2: Transformed data

Now you can see that the data has become linearly separable. Try rotating the plot above: the two classes can now be separated by a hyperplane (a 2D plane in this case) parallel to the x-y plane.
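To check this claim, here is a small sketch (again assuming the data and variables from the earlier snippets): fit a plain linear SVM on the lifted features (x, y, z). In practice you would skip the explicit lift and let SVC with an RBF or polynomial kernel perform an equivalent mapping implicitly.

```python
import numpy as np
from sklearn.svm import SVC

# Explicit lift: stack (x, y, z) into a 3D feature matrix
X_3d = np.column_stack([x, y_coord, z])

# A linear SVM now separates the lifted data with a flat hyperplane
linear_svm = SVC(kernel="linear").fit(X_3d, y)
print("Accuracy in 3D:", linear_svm.score(X_3d, y))  # close to 1.0

# Equivalent in practice: an RBF-kernel SVM on the raw 2D data,
# which performs a similar lift implicitly (the kernel trick)
rbf_svm = SVC(kernel="rbf").fit(X, y)
print("Accuracy with RBF kernel:", rbf_svm.score(X, y))
```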

You can reach out to me on LinkedIn.

Follow me for more articles on Analytics and Data Science.
