ZEMCH 2015 - International Conference Proceedings | Page 248
or $f(\mathbf{x}) = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b)$, where $\mathbf{x}$ are the training samples, $b$ represents the bias and $\mathbf{w}$ corresponds to the vector of weights for each sample. Mathematically, we need to solve for the weights that maximize the margin separating all the training points $\mathbf{x}$ into the two possible class values, +1 or -1.
Regarding the kernel functions used, we included the most common approaches, namely
Linear: $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i \cdot \mathbf{x}_j$
Polynomial: $K(\mathbf{x}_i, \mathbf{x}_j) = (\gamma\, \mathbf{x}_i \cdot \mathbf{x}_j + r)^d$
and Radial Basis Function (RBF): $K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2)$
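The three kernels above can be sketched compactly in pure Python (a minimal illustration; the hyperparameter names `gamma`, `r` and `d` are the conventional ones, not taken from this paper):

```python
import math

# Sketches of the linear, polynomial and RBF kernels for vectors
# x, y given as plain sequences of numbers.
def linear_kernel(x, y):
    # K(x, y) = x . y
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, gamma=1.0, r=0.0, d=3):
    # K(x, y) = (gamma * x . y + r)^d
    return (gamma * linear_kernel(x, y) + r) ** d

def rbf_kernel(x, y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```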
Furthermore, the SVM one-vs-one approach (which has proven to perform better than its one-vs-all counterpart) extends this naturally binary classification algorithm to multi-class classification. The essence of this approach consists in classifying each new point between the two classes of every possible pair of classes. The winning class at that first stage is then compared against the next class, and so on.
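The one-vs-one reduction described above can be sketched as follows. This is a minimal, hypothetical example: a nearest-centroid rule stands in for the binary SVM, since the point here is the pairwise scheme, not the underlying classifier:

```python
import math
from itertools import combinations

def train_binary(points, labels, a, b):
    """Stand-in for a binary SVM: centroids of classes a and b."""
    ca = [p for p, l in zip(points, labels) if l == a]
    cb = [p for p, l in zip(points, labels) if l == b]
    mean = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))
    return mean(ca), mean(cb)

def predict_pair(model, a, b, x):
    """Decide between classes a and b for point x."""
    ca, cb = model
    return a if math.dist(x, ca) <= math.dist(x, cb) else b

def one_vs_one_predict(points, labels, x):
    """Train one binary classifier per pair of classes and
    classify x by majority vote among the pairwise decisions."""
    classes = sorted(set(labels))
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        model = train_binary(points, labels, a, b)
        votes[predict_pair(model, a, b, x)] += 1
    return max(votes, key=votes.get)
```

For k classes this trains k(k-1)/2 pairwise classifiers, which is what makes one-vs-one heavier to train but often more accurate than one-vs-all.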
3.2.3 kNN and HMM Approaches
We have compared the proposed approach against two other classification algorithms: kNN and HMM.
In the case of the kNN prediction, apart from the popular Euclidean distance, other distance measures were considered (namely Mahalanobis and correlation), yet the variations were too small to be significant, with the best overall performance achieved using the typical Euclidean approach.
The kNN model was evaluated using different numbers of neighbours for each scenario, trying each odd number from 3 to 101 and selecting the most accurate.
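The neighbour sweep can be sketched as follows. This is a minimal pure-Python illustration (not the paper's implementation), using leave-one-out accuracy as a stand-in for the per-scenario evaluation:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k):
    """Classify x by majority vote among its k nearest (Euclidean) neighbours."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def best_odd_k(train, labels, max_k=101):
    """Try each odd k from 3 up to max_k and return the k with the
    highest leave-one-out accuracy."""
    def loo_accuracy(k):
        hits = 0
        for i in range(len(train)):
            rest = train[:i] + train[i + 1:]
            rest_labels = labels[:i] + labels[i + 1:]
            hits += knn_predict(rest, rest_labels, train[i], k) == labels[i]
        return hits / len(train)
    ks = range(3, min(max_k, len(train) - 1) + 1, 2)
    return max(ks, key=loo_accuracy)
```

Odd values of k avoid ties in the binary majority vote, which is presumably why the sweep is restricted to them.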
For the parameter estimation of the HMM algorithm, a Maximum Likelihood approach is used.
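The maximum-likelihood estimate of the transition probabilities can be sketched as follows (a hypothetical minimal example, counting observed state-to-state transitions; note how a transition never seen in training ends up with probability zero):

```python
from collections import Counter

def ml_transition_matrix(state_sequences, states):
    """Maximum-likelihood estimate of HMM transition probabilities:
    P(j | i) = count(i -> j) / count(i -> anything)."""
    counts = Counter()
    for seq in state_sequences:
        for i, j in zip(seq, seq[1:]):
            counts[(i, j)] += 1
    totals = {s: sum(counts[(s, j)] for j in states) for s in states}
    return {
        (i, j): (counts[(i, j)] / totals[i]) if totals[i] else 0.0
        for i in states for j in states
    }
```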
Therefore, when there is no specific observation or state transition in the training data, the HMM
algorithm is prone to errors due to 0 probability of transmission or emission. When this happens