We can implicitly represent this line using a weight vector $w$ and a bias $b$. The line then corresponds to the set of points $x$ where

$$w^T x + b = 0.$$

In the case above, we have $w = (1, 1)^T$ and $b = 0$. Now, in order to test whether the point is blue or red, we just have to check whether it is above or below the line. This can be achieved by checking the sign of $w^T x + b$. If it is positive, then $x$ is above the line. If it is negative, then $x$ is below the line. Let’s perform this test for our example point $(3, 2)^T$:

$$
\begin{pmatrix} 1 & 1 \end{pmatrix} \cdot \begin{pmatrix} 3 \\ 2 \end{pmatrix} = 5
$$

Since 5 > 0, we know that the point is above the line and, therefore, should be classified as blue.
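The sign test above can be sketched in a few lines of code. This is a minimal illustration, assuming NumPy and the values from the text ($w = (1, 1)^T$, $b = 0$, and the example point $(3, 2)^T$); the variable names are our own.

```python
import numpy as np

w = np.array([1.0, 1.0])  # weight vector from the example
b = 0.0                   # bias from the example
x = np.array([3.0, 2.0])  # the point we want to classify

score = w @ x + b  # computes w^T x + b
print(score)       # 5.0, which is positive: the point lies above the line
```

Since the score is positive, the point falls on the "blue" side of the line, matching the calculation above.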

Perceptron definition

In general terms, a classifier is a function $\hat{c} : \mathbb{R}^d \rightarrow \{1, 2, \ldots, C\}$ that maps a point to one of $C$ classes. A binary classifier is a classifier with $C = 2$, i.e. one with exactly two classes. A perceptron with weight $w \in \mathbb{R}^d$ and bias $b \in \mathbb{R}$ is a binary classifier where

$$
\hat{c}(x) =
\begin{cases}
1, & \text{if } w^T x + b \geq 0 \\
2, & \text{if } w^T x + b < 0
\end{cases}
$$

$\hat{c}$ partitions $\mathbb{R}^d$ into two half-spaces, each corresponding to one of the two classes. In the 2-dimensional example above, the partitioning is along a line. In general, the partitioning is along a $d-1$ dimensional hyperplane.
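The definition of $\hat{c}$ translates directly into code. The sketch below, which assumes NumPy and reuses the weights and bias from the 2-dimensional example ($w = (1, 1)^T$, $b = 0$), returns class 1 or 2 depending on which half-space a point falls into; the function name is our own.

```python
import numpy as np

def perceptron_classify(x, w, b):
    """Return class 1 if w^T x + b >= 0, otherwise class 2."""
    return 1 if w @ x + b >= 0 else 2

# The 2-dimensional example from the text: w = (1, 1)^T, b = 0.
w = np.array([1.0, 1.0])
b = 0.0

print(perceptron_classify(np.array([3.0, 2.0]), w, b))    # 1: above the line
print(perceptron_classify(np.array([-3.0, -2.0]), w, b))  # 2: below the line
```

Note that points exactly on the hyperplane ($w^T x + b = 0$) are assigned to class 1 by convention, matching the $\geq$ in the case distinction above.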