Hyperplanes and You: Support Vector Machines
supraspatial decision-making
--
A core data science task is classification: sorting data points into groups based on shared qualities.
In a sense, it’s an exercise as old as life itself: as soon as the first protozoan developed sensory organs, it (accidentally) started to act differently based on various sensory stimuli.
On a higher biological level, it’s a monkey looking at an object hanging from a branch and deciding “food” or “not food”.
On a machine level, it’s your ML model combing through credit transactions and deciding “fraud” or “not fraud”.
You’ve probably heard of clustering as a technique for classification; it’s easy enough to visualize on a two-dimensional graph, or even with a Z axis added in.
It’s intuitive, since we move about in three, maybe four dimensions.
But your data may have more axes than that, and the moment your table has four feature columns, you’re working in more dimensions than you can visualize.
How do you draw clean class boundaries in data with 70 features? One clever way is the support vector machine, a geometric classification technique built on hyperplanes, which you can think of as decision boundaries.
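To make that concrete, here’s a minimal sketch of fitting a linear SVM to 70-feature data with scikit-learn. The dataset here is synthetic (not any particular real-world table), just a stand-in for a wide feature matrix:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a "70-feature" table: two classes, 70 columns.
X, y = make_classification(
    n_samples=500, n_features=70, n_informative=10, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear-kernel SVM draws a single separating hyperplane in 70-D space.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))  # accuracy on held-out points
```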
Multiplanar Thinking
In short, SVMs classify data points by drawing a hyperplane that separates the classes while maximizing the margin: the distance between the boundary and the nearest points of each class.
A hyperplane is much simpler than it sounds: a “subspace whose dimension is one less than that of its ambient space”.
In our previous 2D examples, a hyperplane is a 1D line. In a graph with a Z axis, we’d have a 2D plane.
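To see what that boundary actually looks like, here’s a small sketch (again with scikit-learn and a handful of made-up 2D points) that fits a linear SVM and pulls out the hyperplane’s parameters. For a linear SVM, the hyperplane is just the set of points where w·x + b = 0:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Two tiny made-up 2D clusters (illustrative only).
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.0, 1.0],
              [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LinearSVC().fit(X, y)

w = clf.coef_[0]        # normal vector of the hyperplane
b = clf.intercept_[0]   # offset
print(w, b)             # in 2D, w·x + b = 0 is just a line

# Classifying a new point means checking which side of the line it lands on.
new_point = np.array([3.0, 2.0])
print("class 1" if w @ new_point + b > 0 else "class 0")
```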
Let’s start with two dimensions. There are plenty of lines you could draw to separate these points into two classes: