In this talk, we will present the principles and intuitions behind support vector machines (SVMs).
We will see what the maximum margin is, why it is pursued, and how it is achieved. We will
also cover the unavoidable mathematical details of an SVM and a very intuitive geometrical
interpretation of them. Next, we will see how to solve non-linearly separable problems
using the kernel trick. This will allow us to understand mathematically and geometrically
what the hell a kernel is. With this knowledge we will learn the conditions a measure
must fulfill in order to be used as a kernel. Finally, we will present a successful
semi-supervised approach to SVMs: transduction. We will see the foundations of
transduction, its successful applications, why it is a hard problem, and approaches to
approximating a solution to it.
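As a taste of the kernel trick mentioned above, here is a minimal sketch (illustrative only, not material from the talk) of the identity it rests on: a kernel evaluated in the original low-dimensional space equals an ordinary inner product after an explicit feature map into a higher-dimensional space, so an SVM can work in that space without ever computing the map.

```python
# Minimal sketch of the kernel trick (illustrative; the feature map and
# test points below are chosen for the example, not taken from the talk).
import numpy as np

def phi(v):
    """Explicit feature map for the degree-2 polynomial kernel in 2-D:
    phi(v1, v2) = (v1^2, sqrt(2)*v1*v2, v2^2)."""
    v1, v2 = v
    return np.array([v1 ** 2, np.sqrt(2.0) * v1 * v2, v2 ** 2])

def kernel(x, z):
    """Kernel evaluation k(x, z) = (x . z)^2, done in the original 2-D space."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

lhs = kernel(x, z)                   # cheap route: inner product in 2-D, squared
rhs = float(np.dot(phi(x), phi(z)))  # explicit route: map to 3-D, then inner product
print(lhs, rhs)  # both equal 16.0
```

The two routes agree exactly; the trick is that the left one never leaves the original space, which is what makes high- (or infinite-) dimensional feature spaces tractable.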