Support vector machines (SVMs) perform linear regression in $\R^n$ by learning the optimal $w\in \R^n$, $\beta\in \R$ so that $$f(v) \approx v\cdot w+\beta.$$
SVMs perform linear classification by learning the optimal $w\in \R^n$, $\beta\in \R$ such that the hyperplane given by $$\{v\in \R^n\,|\, v\cdot w = \beta \}$$ separates the two classes of data in $\R^n$.
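As a concrete sketch of the two roles of $(w,\beta)$ above: regression evaluates $v\cdot w+\beta$, while classification asks which side of the hyperplane $\{v\cdot w=\beta\}$ a point lies on. The parameters below are illustrative placeholders, not the output of an actual SVM solver.

```python
import numpy as np

# Hypothetical learned parameters (illustrative values, not fit by an SVM solver).
w = np.array([2.0, -1.0])   # normal vector w in R^2
beta = 0.5                  # offset

def f(v):
    """Regression value: v . w + beta."""
    return v @ w + beta

def classify(v):
    """Classification: which side of the hyperplane {v . w = beta} v lies on."""
    return 1 if v @ w > beta else -1

print(f(np.array([1.0, 1.0])))        # 2 - 1 + 0.5 = 1.5
print(classify(np.array([1.0, 0.0]))) # v . w = 2 > 0.5, so class 1
print(classify(np.array([0.0, 1.0]))) # v . w = -1 < 0.5, so class -1
```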
Definition. Two disjoint sets $C_1,C_2 \subseteq \R^n$ are strictly separated by a hyperplane defined by $w\in \R^n,\beta\in \R$ when:
\begin{equation}
\sup_{v\in C_1} v\cdot w \,<\, \beta \,<\, \inf_{u\in C_2} u\cdot w
\end{equation}
Analogously, linear regression on the space of varifolds $\mathcal{V}$ is performed by finding the optimal $h \in C_0(\mathbb{R}^3 \times S^2, \mathbb{R})$, $\beta \in \mathbb{R}$ so that $$f(\mu)\approx \langle\mu,h\rangle+\beta.$$
Linear classification is performed by learning $h \in C_0(\mathbb{R}^3 \times S^2, \mathbb{R})$, $\beta \in \mathbb{R}$, which define a codimension-1 affine subspace of $\mathcal{V}$ given by $$\{\mu\in \mathcal{V}\,|\,\langle \mu,h\rangle = \beta \}$$ that separates sets of varifolds.
Definition. Two disjoint sets $C_1,C_2 \subseteq \mathcal{V}$ are strictly separated by a test function $h \in C_0(\mathbb{R}^3 \times S^2, \mathbb{R})$ when:
\begin{equation}
\sup_{\mu\in C_1} \langle \mu,h\rangle \,<\, \inf_{\nu\in C_2} \langle \nu,h\rangle
\end{equation}
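To make the pairing $\langle\mu,h\rangle$ concrete, one common device (assumed here purely for illustration, not taken from the text above) is to model a discrete varifold as a weighted sum of Dirac masses $\mu = \sum_i a_i\,\delta_{(x_i,n_i)}$ with $x_i\in\mathbb{R}^3$ and unit directions $n_i\in S^2$, so that $\langle\mu,h\rangle = \sum_i a_i\,h(x_i,n_i)$. The test function `h` below is likewise a hypothetical example.

```python
import numpy as np

def pair(weights, points, normals, h):
    """Evaluate <mu, h> for a discrete varifold
    mu = sum_i weights[i] * delta_{(points[i], normals[i])}."""
    return sum(a * h(x, n) for a, x, n in zip(weights, points, normals))

# Hypothetical test function on R^3 x S^2: height times vertical alignment.
def h(x, n):
    return x[2] * n[2]

weights = [1.0, 2.0]
points  = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 2.0])]
normals = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])]

print(pair(weights, points, normals, h))  # 1*1*1 + 2*2*1 = 5.0
```

With this representation, checking the strict-separation condition over two finite families of discrete varifolds again reduces to comparing a maximum of pairings against a minimum.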