1) A Gentle Introduction to Support Vector Machines in Biomedicine by NYU Professors

The tutorial aims to cover the following problem areas in the biomedical domain:

Build computational classification models (or “classifiers”) that assign patients/samples into two or more classes.

Build computational regression models to predict values of some continuous response variable or outcome.

Out of all measured variables in the dataset, select the smallest subset of variables that is necessary for the most accurate prediction (classification or regression) of some variable of interest (e.g., phenotypic response variable).

Build a computational model to identify novel or outlier patients/samples.

Group patients/samples into several clusters based on their similarity.
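For orientation, the first, second, and fourth tasks map directly onto standard SVM-family estimators. A hedged scikit-learn sketch on invented toy data (the class names are real scikit-learn APIs; the "patients" are random numbers):

```python
import numpy as np
from sklearn.svm import SVC, SVR, OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                      # 40 "patients", 3 measurements each

# Task 1 -- classification: assign samples to two classes (invented labels here).
clf = SVC(kernel="rbf").fit(X, (X[:, 0] > 0).astype(int))

# Task 2 -- regression: predict a continuous outcome (invented response here).
reg = SVR().fit(X, X @ np.array([1.0, -0.5, 2.0]))

# Task 4 -- novelty/outlier detection: a one-class SVM flags atypical samples.
flags = OneClassSVM(nu=0.1).fit(X).predict(X)     # +1 = inlier, -1 = outlier

# Task 3 (feature selection) is typically done by wrapping a linear SVM in
# recursive feature elimination; task 5 (clustering) by support-vector
# clustering, which scikit-learn does not ship.
```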

2) OpenCL™ Optimization Case Study: Support Vector Machine Training by AMD

In this article, AMD examines key kernels used in a quadratic programming solver for Support Vector Machine training. The tutorial optimizes the evaluation of the radial basis function (RBF) SVM kernel by examining a variety of data structures and their performance implications, improving performance by a factor of 5 over naive code running on an AMD Radeon™ HD 5870 GPU. It also discusses general rules of thumb that lead to efficient data structures for OpenCL™ computation.
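The computation being optimized can be sketched in plain NumPy (an illustration of the same restructuring idea, not AMD's OpenCL code): the naive double loop and the matrix-multiply formulation produce the same RBF kernel matrix, but the latter maps the work onto one dense matrix product, the kind of access pattern GPUs handle efficiently.

```python
import numpy as np

def rbf_kernel_naive(X, Y, gamma):
    # Direct double loop: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    K = np.empty((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-gamma * np.sum((x - y) ** 2))
    return K

def rbf_kernel_vectorized(X, Y, gamma):
    # Expand ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 so the whole distance
    # matrix comes from one matrix multiply plus broadcasts.
    sq = (X ** 2).sum(1)[:, None] - 2.0 * (X @ Y.T) + (Y ** 2).sum(1)[None, :]
    return np.exp(-gamma * sq)
```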

3) A Tutorial on Support Vector Machines for Pattern Recognition

From the Abstract of the tutorial:

The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
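The kernel mapping technique mentioned in the abstract can be made concrete with a small check (an illustrative sketch, not taken from the tutorial): for the homogeneous polynomial kernel of degree 2 on 2-D inputs, the kernel value equals an ordinary inner product in an explicit three-dimensional feature space.

```python
import numpy as np

def poly_kernel(x, y, d=2):
    # Homogeneous polynomial kernel: k(x, y) = (x . y)^d
    return float(np.dot(x, y)) ** d

def phi(x):
    # Explicit feature map for d = 2 on 2-D inputs:
    # phi(x) = (x1^2, sqrt(2) * x1 * x2, x2^2), so phi(x) . phi(y) = (x . y)^2
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2.0) * x1 * x2, x2 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
same = np.isclose(poly_kernel(x, y), phi(x) @ phi(y))  # both give 16.0
```

For the Gaussian RBF kernel the corresponding feature space is infinite-dimensional, which is how an SVM can have very large (even infinite) VC dimension while still generalizing well in practice, as the abstract discusses.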

4) A tutorial on support vector machine-based methods for classification problems in chemometrics

From the abstract:

This tutorial provides a concise overview of support vector machines and different closely related techniques for pattern classification. The tutorial starts with the formulation of support vector machines for classification. The method of least squares support vector machines is explained. Approaches to retrieve a probabilistic interpretation are covered and it is explained how the binary classification techniques can be extended to multi-class methods. Kernel logistic regression, which is closely related to iteratively weighted least squares support vector machines, is discussed. Different practical aspects of these methods are addressed: the issue of feature selection, parameter tuning, unbalanced data sets, model evaluation and statistical comparison. The different concepts are illustrated on three real-life applications in the field of metabolomics, genetics and proteomics.
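One common way to extend the binary techniques to multi-class problems, as the abstract describes, is one-vs-rest. A minimal NumPy sketch, using a regularized least-squares classifier per class as an LS-SVM-flavoured stand-in (not the paper's exact formulation; data and hyperparameters are invented):

```python
import numpy as np

def ovr_fit(X, y, n_classes, lam=1e-3):
    # One binary least-squares classifier per class: targets are +1 for
    # that class and -1 for the rest.
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    W = []
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)
        w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)
        W.append(w)
    return np.array(W)

def ovr_predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)             # most confident machine wins

# Three tight, well-separated clusters of "samples", one per class.
X_demo = np.array([[0, 0], [0.5, 0], [0, 0.5],
                   [5, 5], [5.5, 5], [5, 5.5],
                   [0, 5], [0.5, 5], [0, 5.5]], dtype=float)
y_demo = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
pred = ovr_predict(ovr_fit(X_demo, y_demo, 3), X_demo)
```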

5) Training a Support Vector Machine in the Primal

Excerpt from the Introduction:

The vast majority of text books and articles introducing Support Vector Machines (SVMs) first state the primal optimization problem, and then go directly to the dual formulation. A reader could easily obtain the impression that this is the only possible way to train an SVM. In this paper, we would like to reveal this as being a misconception, and show that someone unaware of duality theory could train an SVM. One of the main contributions of this paper is to complement earlier studies of primal optimization by including the non-linear case. Our goal is not to claim that primal optimization is better than dual, but merely to show that they are two equivalent ways of reaching the same result. Also, we will show that when the goal is to find an approximate solution, primal optimization is superior.
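The paper's point, that the primal problem can be attacked directly with no duality theory at all, can be sketched as plain subgradient descent on the regularized hinge loss (an illustrative linear-SVM sketch, not the paper's method; data and hyperparameters are invented):

```python
import numpy as np

def train_linear_svm_primal(X, y, lam=0.01, lr=0.1, epochs=500):
    # Minimize  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i (w.x_i + b))
    # directly in the primal by subgradient descent.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # points violating the margin
        grad_w = lam * w - (y[active] @ X[active]) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data: class +1 upper-right, class -1 lower-left.
X_toy = np.array([[2.0, 2.0], [3.0, 2.0], [2.5, 3.0],
                  [-2.0, -2.0], [-3.0, -2.0], [-2.5, -3.0]])
y_toy = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm_primal(X_toy, y_toy)
```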

6) Financial time series forecasting using support vector machines

From the abstract:

Support vector machines (SVMs) are promising methods for the prediction of financial time series because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in financial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction.
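Before an SVM can forecast a series, the series has to be recast as a supervised problem. A minimal sketch of the usual lag-embedding step (an assumed preprocessing convention, not taken from the paper):

```python
import numpy as np

def make_supervised(series, n_lags):
    # Turn a 1-D price series into (X, y) pairs: each row of X holds the
    # n_lags previous values, and y is the value that follows them -- the
    # standard way to feed a time series to an SVM regressor.
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y
```

The resulting (X, y) pairs can then be fed to any SVM regressor, e.g. scikit-learn's SVR, with the held-out tail of the series used for evaluation.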

7) A Tutorial on ν-Support Vector Machines

From the tutorial abstract:

We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.
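The practical appeal of the ν-SVM is that ν has a direct interpretation: it upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors. A hedged sketch of that property using scikit-learn's NuSVC on invented overlapping toy data:

```python
import numpy as np
from sklearn.svm import NuSVC

# Two overlapping Gaussian classes (invented toy data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = NuSVC(nu=0.3, kernel="rbf").fit(X, y)
# By the nu-SVM theory, at least a fraction nu of the training points
# should end up as support vectors.
sv_fraction = len(clf.support_) / len(X)
```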