
Libsvm is a simple, easy-to-use, and efficient software for SVM
classification and regression. It solves C-SVM classification, nu-SVM
classification, one-class-SVM, epsilon-SVM regression, and nu-SVM
regression. It also provides an automatic model selection tool for
C-SVM classification. This document explains the use of libsvm.

Libsvm is available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm
Please read the COPYRIGHT file before using libsvm.

Table of Contents
=================

- Quick Start
- Installation and Data Format
- `svm-train' Usage
- `svm-predict' Usage
- `svm-scale' Usage
- Tips on Practical Use
- Examples
- Precomputed Kernels
- Library Usage
- Java Version
- Building Windows Binaries
- Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
- MATLAB/OCTAVE Interface
- Python Interface
- Additional Information

Quick Start
===========

If you are new to SVM and your data is not large, please go to the
`tools' directory and use easy.py after installation. It does
everything automatically -- from data scaling to parameter selection.

Usage: easy.py training_file [testing_file]

More information about parameter selection can be found in
`tools/README'.

Installation and Data Format
============================

On Unix systems, type `make' to build the `svm-train', `svm-predict',
and `svm-scale' programs. Run them without arguments to show their
usage messages.

On other systems, consult `Makefile' to build them (e.g., see
'Building Windows binaries' in this file) or use the pre-built
binaries (Windows binaries are in the directory `windows').

The format of training and testing data files is:

<label> <index1>:<value1> <index2>:<value2> ...
.
.
.

Each line contains an instance and is ended by a '\n' character. A
sample may have no feature values at all (i.e., a row whose features
are all zeros), but the <label> column must not be empty. For <label>
in the training set, we have the following cases.

* For classification, <label> is an integer indicating the class label
  (multi-class classification is supported).

* For regression, <label> is the target value which can be any real
  number.

* For one-class SVM, <label> has no effect and can be any number.

In the test set, <label> is used only to calculate accuracy or
errors. If it's unknown, any number is fine. For one-class SVM, if
non-outliers/outliers are known, their labels in the test file must be
+1/-1 for evaluation. The <label> column is read using strtod() provided by 
the C standard library. Therefore, <label> values that are numerically 
equivalent will be treated the same (e.g., +01e0 and 1 count as the same class).

The pair <index>:<value> gives a feature (attribute) value: <index> is
an integer starting from 1 and <value> is a real number. The only
exception is the precomputed kernel, where <index> starts from 0; see
the section of precomputed kernels. Indices must be in ASCENDING
order.
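
For instance, a small hypothetical two-class training file could look
like the following (the second instance omits features 3 and 5, so
they are treated as zeros):

+1 1:0.708 2:1 3:1 4:-0.32
-1 1:0.583 2:-1 4:-0.105
+1 2:0.44 3:1 5:-0.419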

A sample classification data included in this package is
`heart_scale'. To check if your data is in a correct form, use
`tools/checkdata.py' (details in `tools/README').

Type `svm-train heart_scale', and the program will read the training
data and output the model file `heart_scale.model'. If you have a test
set called heart_scale.t, then type `svm-predict heart_scale.t
heart_scale.model output' to see the prediction accuracy. The `output'
file contains the predicted class labels.

For classification, if the training data are in only one class (i.e.,
all labels are the same), then `svm-train' issues a warning message:
`Warning: training data in only one class. See README for details,'
which means the training data are very unbalanced. In this case, the
label in the training data is directly returned when testing.

There are some other useful programs in this package.

svm-scale:

	This is a tool for scaling input data files.

svm-toy:

	This is a simple graphical interface which shows how SVM
	separates data in a plane. You can click in the window to
	draw data points. Use the "change" button to choose class
	1, 2 or 3 (i.e., up to three classes are supported), the "load"
	button to load data from a file, the "save" button to save data to
	a file, the "run" button to obtain an SVM model, and the "clear"
	button to clear the window.

	You can enter options at the bottom of the window; the syntax of
	the options is the same as for `svm-train'.

	Note that "load" and "save" use a dense data format in both the
	classification and the regression cases. For classification,
	each data point has one label (the color), which must be 1, 2,
	or 3, and two attributes (x-axis and y-axis values) in
	[0,1). For regression, each data point has one target value
	(y-axis) and one attribute (x-axis value) in [0, 1).

	Type `make' in the respective directories to build them.

	You need the Qt library to build the Qt version.
	(available from http://www.trolltech.com)

	You need the GTK+ library to build the GTK version.
	(available from http://www.gtk.org)

	The pre-built Windows binaries are in the `windows'
	directory. We use Visual C++ on a 64-bit machine.

`svm-train' Usage
=================

Usage: svm-train [options] training_set_file [model_file]
options:
-s svm_type : set type of SVM (default 0)
	0 -- C-SVC		(multi-class classification)
	1 -- nu-SVC		(multi-class classification)
	2 -- one-class SVM
	3 -- epsilon-SVR	(regression)
	4 -- nu-SVR		(regression)
-t kernel_type : set type of kernel function (default 2)
	0 -- linear: u'*v
	1 -- polynomial: (gamma*u'*v + coef0)^degree
	2 -- radial basis function: exp(-gamma*|u-v|^2)
	3 -- sigmoid: tanh(gamma*u'*v + coef0)
	4 -- precomputed kernel (kernel values in training_set_file)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/num_features)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train a model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n: n-fold cross validation mode
-q : quiet mode (no outputs)


The -v option randomly splits the data into n parts and calculates
cross validation accuracy/mean squared error on them.

See the libsvm FAQ for the meaning of the outputs.

`svm-predict' Usage
===================

Usage: svm-predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0).

model_file is the model file generated by svm-train.
test_file is the test data you want to predict.
svm-predict writes the predicted labels to output_file.

`svm-scale' Usage
=================

Usage: svm-scale [options] data_filename
options:
-l lower : x scaling lower limit (default -1)
-u upper : x scaling upper limit (default +1)
-y y_lower y_upper : y scaling limits (default: no y scaling)
-s save_filename : save scaling parameters to save_filename
-r restore_filename : restore scaling parameters from restore_filename

See the 'Examples' section in this file.

Tips on Practical Use
=====================

* Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
* For C-SVC, consider using the model selection tool in the tools directory.
* nu in nu-SVC/one-class-SVM/nu-SVR approximates the fraction of training
  errors and support vectors.
* If data for classification are unbalanced (e.g. many positive and
  few negative), try different penalty parameters C by -wi (see
  examples below).
* Specify larger cache size (i.e., larger -m) for huge problems.

Examples
========

> svm-scale -l -1 -u 1 -s range train > train.scale
> svm-scale -r range test > test.scale

Scale each feature of the training data to be in [-1,1]. Scaling
factors are stored in the file range and then used for scaling the
test data.

> svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 data_file

Train a classifier with RBF kernel exp(-0.5|u-v|^2), C=5, and
stopping tolerance 0.1.

> svm-train -s 3 -p 0.1 -t 0 data_file

Solve SVM regression with linear kernel u'v and epsilon=0.1
in the loss function.

> svm-train -c 10 -w1 1 -w-2 5 -w4 2 data_file

Train a classifier with penalty 10 = 1 * 10 for class 1, penalty 50 =
5 * 10 for class -2, and penalty 20 = 2 * 10 for class 4.

> svm-train -s 0 -c 100 -g 0.1 -v 5 data_file

Do five-fold cross validation for the classifier using
the parameters C = 100 and gamma = 0.1.

> svm-train -s 0 -b 1 data_file
> svm-predict -b 1 test_file data_file.model output_file

Obtain a model with probability information and predict test data with
probability estimates.

Precomputed Kernels
===================

Users may precompute kernel values and input them as training and
testing files.  Then libsvm does not need the original
training/testing sets.

Assume there are L training instances x1, ..., xL. Let K(x, y) be the
kernel value of two instances x and y. The input formats
are:

New training instance for xi:

<label> 0:i 1:K(xi,x1) ... L:K(xi,xL)

New testing instance for any x:

<label> 0:? 1:K(x,x1) ... L:K(x,xL)

That is, in the training file the first column must be the "ID" of
xi. In testing, ? can be any value.

All kernel values including ZEROs must be explicitly provided.  Any
permutation or random subsets of the training/testing files are also
valid (see examples below).

Note: the format is slightly different from the precomputed kernel
package released in libsvmtools earlier.

Examples:

	Assume the original training data has three four-feature
	instances and the testing data has one instance:

	15  1:1 2:1 3:1 4:1
	45      2:3     4:3
	25          3:1

	15  1:1     3:1

	If the linear kernel is used, we have the following new
	training/testing sets:

	15  0:1 1:4 2:6  3:1
	45  0:2 1:6 2:18 3:0
	25  0:3 1:1 2:0  3:1

	15  0:? 1:2 2:0  3:1

	? can be any value.

	Any subset of the above training file is also valid. For example,

	25  0:3 1:1 2:0  3:1
	45  0:2 1:6 2:18 3:0

	implies that the kernel matrix is

		[K(2,2) K(2,3)] = [18 0]
		[K(3,2) K(3,3)] = [0  1]

Library Usage
=============

These functions and structures are declared in the header file
`svm.h'.  You need to #include "svm.h" in your C/C++ source files and
link your program with `svm.cpp'. You can see `svm-train.c' and
`svm-predict.c' for examples showing how to use them. We define
LIBSVM_VERSION and declare `extern int libsvm_version;' in svm.h, so
you can check the version number.

Before you classify test data, you need to construct an SVM model
(`svm_model') using training data. A model can also be saved in
a file for later use. Once an SVM model is available, you can use it
to classify new data.

- Function: struct svm_model *svm_train(const struct svm_problem *prob,
					const struct svm_parameter *param);

    This function constructs and returns an SVM model according to
    the given training data and parameters.

    struct svm_problem describes the problem:

	struct svm_problem
	{
		int l;
		double *y;
		struct svm_node **x;
	};

    where `l' is the number of training data, and `y' is an array containing
    their target values (integers in classification, real numbers in
    regression). `x' is an array of pointers, each of which points to a sparse
    representation (array of svm_node) of one training vector.

    For example, if we have the following training data:

    LABEL    ATTR1    ATTR2    ATTR3    ATTR4    ATTR5
    -----    -----    -----    -----    -----    -----
      1        0        0.1      0.2      0        0
      2        0        0.1      0.3     -1.2      0
      1        0.4      0        0        0        0
      2        0        0.1      0        1.4      0.5
      3       -0.1     -0.2      0.1      1.1      0.1

    then the components of svm_problem are:

    l = 5

    y -> 1 2 1 2 3

    x -> [ ] -> (2,0.1) (3,0.2) (-1,?)
         [ ] -> (2,0.1) (3,0.3) (4,-1.2) (-1,?)
         [ ] -> (1,0.4) (-1,?)
         [ ] -> (2,0.1) (4,1.4) (5,0.5) (-1,?)
         [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (-1,?)

    where (index,value) is stored in the structure `svm_node':

	struct svm_node
	{
		int index;
		double value;
	};

    index = -1 indicates the end of one vector. Note that indices must
    be in ASCENDING order.
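
    As an illustration, the first training vector above, (2,0.1) (3,0.2),
    can be built as in the following minimal sketch (the variable name
    is ours):

	struct svm_node x1[3];
	x1[0].index = 2; x1[0].value = 0.1;	/* ATTR2 */
	x1[1].index = 3; x1[1].value = 0.2;	/* ATTR3 */
	x1[2].index = -1;			/* end of vector */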

    struct svm_parameter describes the parameters of an SVM model:

	struct svm_parameter
	{
		int svm_type;
		int kernel_type;
		int degree;	/* for poly */
		double gamma;	/* for poly/rbf/sigmoid */
		double coef0;	/* for poly/sigmoid */

		/* these are for training only */
		double cache_size; /* in MB */
		double eps;	/* stopping criteria */
		double C;	/* for C_SVC, EPSILON_SVR, and NU_SVR */
		int nr_weight;		/* for C_SVC */
		int *weight_label;	/* for C_SVC */
		double* weight;		/* for C_SVC */
		double nu;	/* for NU_SVC, ONE_CLASS, and NU_SVR */
		double p;	/* for EPSILON_SVR */
		int shrinking;	/* use the shrinking heuristics */
		int probability; /* do probability estimates */
	};

    svm_type can be one of C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR, NU_SVR.

    C_SVC:		C-SVM classification
    NU_SVC:		nu-SVM classification
    ONE_CLASS:		one-class-SVM
    EPSILON_SVR:	epsilon-SVM regression
    NU_SVR:		nu-SVM regression

    kernel_type can be one of LINEAR, POLY, RBF, SIGMOID, and PRECOMPUTED.

    LINEAR:	u'*v
    POLY:	(gamma*u'*v + coef0)^degree
    RBF:	exp(-gamma*|u-v|^2)
    SIGMOID:	tanh(gamma*u'*v + coef0)
    PRECOMPUTED: kernel values in training_set_file

    cache_size is the size of the kernel cache, specified in megabytes.
    C is the cost of constraint violation.
    eps is the stopping tolerance (we usually use 0.00001 in nu-SVC and
    0.001 in the others). nu is the parameter in nu-SVM, nu-SVR, and
    one-class-SVM. p is the epsilon in the epsilon-insensitive loss
    function of epsilon-SVM regression. shrinking = 1 means shrinking
    is conducted; = 0 otherwise. probability = 1 means a model with
    probability information is obtained; = 0 otherwise.

    nr_weight, weight_label, and weight are used to change the penalty
    for some classes (if the weight for a class is not changed, it is
    set to 1). This is useful for training a classifier on unbalanced
    input data or with asymmetric misclassification costs.

    nr_weight is the number of elements in the array weight_label and
    weight. Each weight[i] corresponds to weight_label[i], meaning that
    the penalty of class weight_label[i] is scaled by a factor of weight[i].

    If you do not want to change the penalty for any of the classes,
    just set nr_weight to 0.
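
    For reference, here is a sketch that fills in svm_parameter with
    values matching the `svm-train' defaults; the gamma value is an
    assumption (svm-train sets it to 1/num_features at run time):

	struct svm_parameter param;
	param.svm_type = C_SVC;
	param.kernel_type = RBF;
	param.degree = 3;
	param.gamma = 0.5;	/* e.g., 1/num_features */
	param.coef0 = 0;
	param.cache_size = 100;	/* in MB */
	param.eps = 1e-3;
	param.C = 1;
	param.nu = 0.5;
	param.p = 0.1;
	param.shrinking = 1;
	param.probability = 0;
	param.nr_weight = 0;	/* keep the penalty of every class at C */
	param.weight_label = NULL;
	param.weight = NULL;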

    *NOTE* Because svm_model contains pointers to svm_problem, you
    cannot free the memory used by svm_problem while you are still
    using the svm_model produced by svm_train().

    *NOTE* To avoid wrong parameters, svm_check_parameter() should be
    called before svm_train().
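
    Putting the two notes together, a typical call sequence looks like
    the following sketch (assuming `prob' and `param' have been set up
    as above):

	const char *error_msg = svm_check_parameter(&prob, &param);
	if (error_msg)
	{
		fprintf(stderr, "ERROR: %s\n", error_msg);
		exit(1);
	}
	struct svm_model *model = svm_train(&prob, &param);
	/* use the model; keep prob alive while the model is in use */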

    struct svm_model stores the model obtained from the training procedure.
    It is not recommended to directly access entries in this structure.
    Programmers should use the interface functions to get the values.

	struct svm_model
	{
		struct svm_parameter param;	/* parameter */
		int nr_class;		/* number of classes, = 2 in regression/one class svm */
		int l;			/* total #SV */
		struct svm_node **SV;		/* SVs (SV[l]) */
		double **sv_coef;	/* coefficients for SVs in decision functions (sv_coef[k-1][l]) */
		double *rho;		/* constants in decision functions (rho[k*(k-1)/2]) */
		double *probA;		/* pairwise probability information */
		double *probB;
		double *prob_density_marks;	/*probability information for ONE_CLASS*/
		int *sv_indices;        /* sv_indices[0,...,nSV-1] are values in [1,...,num_training_data] to indicate SVs in the training set */

		/* for classification only */

		int *label;		/* label of each class (label[k]) */
		int *nSV;		/* number of SVs for each class (nSV[k]) */
					/* nSV[0] + nSV[1] + ... + nSV[k-1] = l */
		/* XXX */
		int free_sv;		/* 1 if svm_model is created by svm_load_model*/
					/* 0 if svm_model is created by svm_train */
	};

    param describes the parameters used to obtain the model.

    nr_class is the number of classes for classification. It is a
    non-negative integer with special cases of 0 (no training data at
    all) and 1 (all training data in one class). For regression and
    one-class SVM, nr_class = 2.

    l is the number of support vectors. SV and sv_coef are support
    vectors and the corresponding coefficients, respectively. Assume there are
    k classes. For data in class j, the corresponding sv_coef includes (k-1) y*alpha vectors,
    where alpha's are solutions of the following two class problems:
    1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
    and y=1 for the first j-1 vectors, while y=-1 for the remaining k-j
    vectors. For example, if there are 4 classes, sv_coef and SV are like:

        +-+-+-+--------------------+
        |1|1|1|                    |
        |v|v|v|  SVs from class 1  |
        |2|3|4|                    |
        +-+-+-+--------------------+
        |1|2|2|                    |
        |v|v|v|  SVs from class 2  |
        |2|3|4|                    |
        +-+-+-+--------------------+
        |1|2|3|                    |
        |v|v|v|  SVs from class 3  |
        |3|3|4|                    |
        +-+-+-+--------------------+
        |1|2|3|                    |
        |v|v|v|  SVs from class 4  |
        |4|4|4|                    |
        +-+-+-+--------------------+

    See svm_train() for an example of assigning values to sv_coef.

    rho is the bias term (-b). probA and probB are parameters used in
    probability outputs. If there are k classes, there are k*(k-1)/2
    binary problems as well as rho, probA, and probB values. They are
    aligned in the order of binary problems:
    1 vs 2, 1 vs 3, ..., 1 vs k, 2 vs 3, ..., 2 vs k, ..., k-1 vs k.

    sv_indices[0,...,nSV-1] are values in [1,...,num_training_data] to
    indicate support vectors in the training set.

    label contains labels in the training data.

    nSV is the number of support vectors in each class.

    free_sv is a flag used to determine whether the space of SV should
    be released in free_model_content(struct svm_model*) and
    free_and_destroy_model(struct svm_model**). If the model is
    generated by svm_train(), then SV points to data in svm_problem
    and should not be removed. For example, free_sv is 0 if svm_model
    is created by svm_train, but is 1 if created by svm_load_model.

- Function: double svm_predict(const struct svm_model *model,
                               const struct svm_node *x);

    This function does classification or regression on a test vector x
    given a model.

    For a classification model, the predicted class for x is returned.
    For a regression model, the function value of x calculated using
    the model is returned. For a one-class model, +1 or -1 is
    returned.
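
    For example, a sketch of predicting the sparse vector (2,0.1) (3,0.2)
    with a previously trained `model':

	struct svm_node x[3] = { {2, 0.1}, {3, 0.2}, {-1, 0} };
	double predicted_label = svm_predict(model, x);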

- Function: void svm_cross_validation(const struct svm_problem *prob,
	const struct svm_parameter *param, int nr_fold, double *target);

    This function conducts cross validation. Data are separated into
    nr_fold folds. Under the given parameters, each fold is sequentially
    validated using the model trained on the remaining folds. The
    predicted labels (for all of prob's instances) obtained in the
    validation process are stored in the array target.

    The format of prob is the same as that for svm_train().
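
    A sketch of computing five-fold cross validation accuracy for a
    classification problem (assuming `prob' and `param' as above):

	double *target = (double *)malloc(prob.l * sizeof(double));
	int i, correct = 0;
	svm_cross_validation(&prob, &param, 5, target);
	for (i = 0; i < prob.l; i++)
		if (target[i] == prob.y[i])
			++correct;
	printf("Cross Validation Accuracy = %g%%\n", 100.0*correct/prob.l);
	free(target);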

- Function: int svm_get_svm_type(const struct svm_model *model);

    This function gives the svm_type of the model. Possible values of
    svm_type are defined in svm.h.

- Function: int svm_get_nr_class(const svm_model *model);

    For a classification model, this function gives the number of
    classes. For a regression or a one-class model, 2 is returned.

- Function: void svm_get_labels(const svm_model *model, int* label)

    For a classification model, this function outputs the names of the
    labels into an array called label. For regression and one-class
    models, label is unchanged.

- Function: void svm_get_sv_indices(const struct svm_model *model, int *sv_indices)

    This function outputs indices of support vectors into an array called sv_indices.
    The size of sv_indices is the number of support vectors and can be obtained by calling svm_get_nr_sv.
    Each sv_indices[i] is in the range of [1, ..., num_traning_data].

- Function: int svm_get_nr_sv(const struct svm_model *model)

    This function gives the total number of support vectors.

- Function: double svm_get_svr_probability(const struct svm_model *model);

    For a regression model with probability information, this function
    outputs a value sigma > 0. For test data, we consider the
    probability model: target value = predicted value + z, where z
    follows a Laplace distribution with density e^(-|z|/sigma)/(2*sigma).

    If the model is not for SVR or does not contain the required
    information, 0 is returned.

- Function: double svm_predict_values(const svm_model *model,
				    const svm_node *x, double* dec_values)

    This function gives decision values on a test vector x given a
    model, and returns the predicted label (classification) or
    the function value (regression).

    For a classification model with nr_class classes, this function
    gives nr_class*(nr_class-1)/2 decision values in the array
    dec_values, where nr_class can be obtained from the function
    svm_get_nr_class. The order is label[0] vs. label[1], ...,
    label[0] vs. label[nr_class-1], label[1] vs. label[2], ...,
    label[nr_class-2] vs. label[nr_class-1], where label can be
    obtained from the function svm_get_labels. The returned value is
    the predicted class for x. Note that when nr_class = 1, this
    function does not give any decision value.

    For a regression model, dec_values[0] and the returned value are
    both the function value of x calculated using the model. For a
    one-class model, dec_values[0] is the decision value of x, while
    the returned value is +1/-1.
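
    A sketch of retrieving the pairwise decision values of a
    classification model (the variable names are ours):

	int nr_class = svm_get_nr_class(model);
	double *dec_values = (double *)malloc(
		nr_class*(nr_class-1)/2 * sizeof(double));
	double label = svm_predict_values(model, x, dec_values);
	/* dec_values[0] is label[0] vs. label[1], and so on */
	free(dec_values);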

- Function: double svm_predict_probability(const struct svm_model *model,
	    const struct svm_node *x, double* prob_estimates);

    This function does classification or regression on a test vector x
    given a model with probability information.

    For a classification model with probability information, this
    function gives nr_class probability estimates in the array
    prob_estimates. nr_class can be obtained from the function
    svm_get_nr_class. The class with the highest probability is
    returned. For one-class SVM, the array prob_estimates contains
    two elements for the probabilities of being a normal instance and
    an outlier, while for regression, the array is unchanged. For both
    one-class SVM and regression, the returned value is the same as
    that of svm_predict.
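
    For example, a sketch for a classification model trained with
    probability = 1:

	int nr_class = svm_get_nr_class(model);
	double *prob_estimates = (double *)malloc(nr_class * sizeof(double));
	double predicted = svm_predict_probability(model, x, prob_estimates);
	/* prob_estimates[i] corresponds to the i-th label returned by
	   svm_get_labels() */
	free(prob_estimates);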

- Function: const char *svm_check_parameter(const struct svm_problem *prob,
                                            const struct svm_parameter *param);

    This function checks whether the parameters are within the feasible
    range of the problem. This function should be called before calling
    svm_train() and svm_cross_validation(). It returns NULL if the
    parameters are feasible, otherwise an error message is returned.

- Function: int svm_check_probability_model(const struct svm_model *model);

    This function checks whether the model contains required
    information to do probability estimates. If so, it returns
    +1. Otherwise, 0 is returned. This function should be called
    before calling svm_get_svr_probability and
    svm_predict_probability.

- Function: int svm_save_model(const char *model_file_name,
			       const struct svm_model *model);

    This function saves a model to a file; returns 0 on success, or -1
    if an error occurs.

- Function: struct svm_model *svm_load_model(const char *model_file_name);

    This function returns a pointer to the model read from the file,
    or a null pointer if the model could not be loaded.
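
    A sketch of the save/load round trip (the file name is ours):

	if (svm_save_model("data.model", model) != 0)
		fprintf(stderr, "can't save model\n");
	struct svm_model *loaded = svm_load_model("data.model");
	if (loaded == NULL)
		fprintf(stderr, "can't load model\n");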

- Function: void svm_free_model_content(struct svm_model *model_ptr);

    This function frees the memory used by the entries in a model structure.

- Function: void svm_free_and_destroy_model(struct svm_model **model_ptr_ptr);

    This function frees the memory used by a model and destroys the model
    structure. It is equivalent to svm_destroy_model, which
    is deprecated after version 3.0.

- Function: void svm_destroy_param(struct svm_parameter *param);

    This function frees the memory used by a parameter set.

- Function: void svm_set_print_string_function(void (*print_func)(const char *));

    Users can specify their output format by providing a print
    function. Use
        svm_set_print_string_function(NULL);
    for default printing to stdout.
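
    For example, a sketch that silences all training output:

	static void print_null(const char *s) {}	/* discard messages */

	/* before calling svm_train(): */
	svm_set_print_string_function(&print_null);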

Java Version
============

The pre-compiled java class archive `libsvm.jar' and its source files are
in the java directory. To run the programs, use

java -classpath libsvm.jar svm_train <arguments>
java -classpath libsvm.jar svm_predict <arguments>
java -classpath libsvm.jar svm_toy
java -classpath libsvm.jar svm_scale <arguments>

Note that you need Java 1.5 (5.0) or above to run it.

You may need to add Java runtime library (like classes.zip) to the classpath.
You may need to increase maximum Java heap size.

Library usages are similar to the C version. These functions are available:

public class svm {
	public static final int LIBSVM_VERSION=332;
	public static svm_model svm_train(svm_problem prob, svm_parameter param);
	public static void svm_cross_validation(svm_problem prob, svm_parameter param, int nr_fold, double[] target);
	public static int svm_get_svm_type(svm_model model);
	public static int svm_get_nr_class(svm_model model);
	public static void svm_get_labels(svm_model model, int[] label);
	public static void svm_get_sv_indices(svm_model model, int[] indices);
	public static int svm_get_nr_sv(svm_model model);
	public static double svm_get_svr_probability(svm_model model);
	public static double svm_predict_values(svm_model model, svm_node[] x, double[] dec_values);
	public static double svm_predict(svm_model model, svm_node[] x);
	public static double svm_predict_probability(svm_model model, svm_node[] x, double[] prob_estimates);
	public static void svm_save_model(String model_file_name, svm_model model) throws IOException
	public static svm_model svm_load_model(String model_file_name) throws IOException
	public static String svm_check_parameter(svm_problem prob, svm_parameter param);
	public static int svm_check_probability_model(svm_model model);
	public static void svm_set_print_string_function(svm_print_interface print_func);
}

The library is in the "libsvm" package.
Note that in the Java version, svm_node[] is not ended with a node
whose index = -1.
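
For example, the sparse vector 2:0.1 3:0.2 is built as in the
following sketch:

	svm_node[] x = new svm_node[2];
	x[0] = new svm_node(); x[0].index = 2; x[0].value = 0.1;
	x[1] = new svm_node(); x[1].index = 3; x[1].value = 0.2;
	// no terminating node with index = -1 is needed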

Users can specify their output format by

	your_print_func = new svm_print_interface()
	{
		public void print(String s)
		{
			// your own format
		}
	};
	svm.svm_set_print_string_function(your_print_func);

Building Windows Binaries
=========================

Windows binaries are available in the directory `windows'. To re-build
them via Visual C++, use the following steps:

1. Open a DOS command box (or Visual Studio Command Prompt) and change
to libsvm directory. If environment variables of VC++ have not been
set, type

"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"

You may have to modify the above command according to which version of
VC++ you have and where it is installed.

2. Type

nmake -f Makefile.win clean all

3. (optional) To build shared library libsvm.dll, type

nmake -f Makefile.win lib

4. (optional) To build 32-bit windows binaries, you must
	(1) Setup "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars32.bat" instead of vcvars64.bat
	(2) Change CFLAGS in Makefile.win: /D _WIN64 to /D _WIN32

Another way is to build them from Visual C++ environment. See details
in libsvm FAQ.

Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
==========================================================================

See the README file in the tools directory.

MATLAB/OCTAVE Interface
=======================

Please check the file README in the directory `matlab'.

Python Interface
================

See the README file in python directory.

Additional Information
======================

If you find LIBSVM helpful, please cite it as

Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support
vector machines. ACM Transactions on Intelligent Systems and
Technology, 2:27:1--27:27, 2011. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm

LIBSVM implementation document is available at
http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf

For any questions and comments, please email [email protected]

Acknowledgments:
This work was supported in part by the National Science
Council of Taiwan via the grant NSC 89-2213-E-002-013.
The authors thank their group members and users
for many helpful discussions and comments. They are listed in
http://www.csie.ntu.edu.tw/~cjlin/libsvm/acknowledgements


libsvm's Issues
===============

CppCheck errors for realloc() usage

I'd like to report that CppCheck flags several of the C/C++ files for using realloc() without testing that the result isn't NULL, resulting in possible memory leaks.

You can gather results by running:

cppcheck --quiet /path/to/libsvm

[/src/libsvm/matlab/libsvmread.c:48]: (error) Common realloc mistake: 'line' nulled but not freed upon failure
[/src/libsvm/svm-predict.c:31]: (error) Common realloc mistake: 'line' nulled but not freed upon failure
[/src/libsvm/svm-predict.c:96]: (error) Common realloc mistake: 'x' nulled but not freed upon failure
[/src/libsvm/svm-scale.c:342]: (error) Common realloc mistake: 'line' nulled but not freed upon failure
[/src/libsvm/svm-train.c:75]: (error) Common realloc mistake: 'line' nulled but not freed upon failure
[/src/libsvm/svm.cpp:2042]: (error) Common realloc mistake: 'label' nulled but not freed upon failure
[src/libsvm/svm.cpp:2043]: (error) Common realloc mistake: 'count' nulled but not freed upon failure
[/src/libsvm/svm.cpp:2757]: (error) Common realloc mistake: 'line' nulled but not freed upon failure
[/src/libsvm/svm.cpp:3137]: (error) Common realloc mistake: 'label' nulled but not freed upon failure
[/src/libsvm/svm.cpp:3138]: (error) Common realloc mistake: 'count' nulled but not freed upon failure

Drilling into the first one:

...
line = (char *) realloc(line, max_line_len);
...

To fix these, you should check whether realloc returns NULL. If it does, free(line). If not, assign the returned pointer to line. Without this check, line will be set to NULL on failure and the original object it pointed to will be leaked. More detailed guidance at:
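
For example, a sketch of the checked pattern:

char *tmp = (char *) realloc(line, max_line_len);
if (tmp == NULL)
{
	free(line);	/* avoid leaking the original buffer */
	/* handle the allocation failure here */
}
else
	line = tmp;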

Thanks!

strange results with polynomial kernel

Hello,

For some reason I get very strange classifications with the polynomial kernel. I have 966 training instances and 518 test instances. With the polynomial kernel I get only negative classifications. With any other kernel I get different results with small variance (accuracy is approximately 35%).

The problem is that I don't understand why the polynomial kernel gives these non-meaningful results. How can I debug it?

Possible bugs in one-class SVM -- the parity of the number of data may break the performance.

I am now using one-class SVM to learn a model from a dataset with 271 data points.
All 271 points are labeled +1, of course.
To measure the performance of a model, I first take a look at the accuracy of the results on the training data -- it shouldn't be too bad, at least.

Here are the steps that I found and confirmed the problem:

I. I train a model on the first 270 data points; the model I get correctly predicts that the last datum should be positive, and the accuracy of the results on the training data seems good.

II. When I train a model on all 271 data points, the performance of the model on the training data suddenly drops a lot, and it even predicts the last datum to be negative.

A possible reason for this phenomenon may be that the last datum is overfitted, but it is hard to believe that a model derived from 270 data points can be changed so much by merely one datum, and that the overfitted datum is predicted to be negative.

III. To make the model fit the last datum even more, I train a model on 272 data points -- the original 271 plus one copy of the last datum. Contrary to my expectation, the performance becomes good again.

This doesn't make sense if the reason is overfitting. So I guess that the problem is the parity of the size of the dataset. To test my guess, here is step IV.

IV. To rule out the possibility that the last datum is weird, I use only the first 270 data points for this test. Each time I randomly choose i data points (i = 0~9) and duplicate them to form a dataset of size 270 + i. Then I train a model on the dataset and check its performance. For each i, I run 30 trials and take the average value as the result. The results clearly show that when i is odd, the performance is very poor, while this phenomenon never appears when i is even.

This is the code of the 4 tests mentioned above. Though I use sklearn in the code, this problem can be reproduced by using libsvm directly as well (with gamma=1/n_features, which is the 'auto' value in sklearn). The dataset can be downloaded here. For now, a workaround is for the user to always keep the size of the training dataset even.

Java libsvm reports "reaching max number of iterations", libsvm.dll does not

With the same data, SVM type, and parameters, the Java libsvm reports "reaching max number of iterations", but libsvm.dll does not.
Version: 3.21
OS:Win 7 x64

Data:https://github.com/idlesysman/java/blob/master/file/trainData.zip

Parameters:
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.RBF;
param.gamma = 0.36;
param.C = 10;

svm_cross_validation.nr_fold=10

Using default values:
param.degree = 3;
param.coef0 = 0;
param.nu = 0.5;
param.cache_size = 100;
param.eps = 1e-3;
param.p = 0.1;
param.shrinking = 1;
param.probability = 0;
param.nr_weight = 0;
param.weight_label = new int[0];

param.weight = new double[0];

thanks!

grid.py waits for results even though all workers have stopped

The program would wait for a result even though all workers had quit because of an error or a C-c. This isn't the most elegant fix, but it is the only one I could manage in the time I had.

Author: Bjarte Johansen <[email protected]>
Date:   Tue Dec 2 15:51:34 2014 +0100

    Fix waiting for results when there are no workers

    The program would wait for a result even though all workers had quit
    because of an error or a C-c.

diff --git a/tools/grid.py b/tools/grid.py
index 40f55fb..7c5b744 100755
--- a/tools/grid.py
+++ b/tools/grid.py
@@ -390,6 +390,7 @@ def find_parameters(dataset_pathname, options=''):

    job_queue._put = job_queue.queue.appendleft

+   workers = []
    # fire telnet workers

    if telnet_workers:
@@ -400,6 +401,7 @@ def find_parameters(dataset_pathname, options=''):
            worker = TelnetWorker(host,job_queue,result_queue,
                     host,username,password,options)
            worker.start()
+           workers.append(worker)

    # fire ssh workers

@@ -407,12 +409,14 @@ def find_parameters(dataset_pathname, options=''):
        for host in ssh_workers:
            worker = SSHWorker(host,job_queue,result_queue,host,options)
            worker.start()
+           workers.append(worker)

    # fire local workers

    for i in range(nr_local_worker):
        worker = LocalWorker('local',job_queue,result_queue,options)
        worker.start()
+       workers.append(worker)

    # gather results

@@ -436,7 +440,11 @@ def find_parameters(dataset_pathname, options=''):
    for line in jobs:
        for (c,g) in line:
            while (c,g) not in done_jobs:
-               (worker,c1,g1,rate1) = result_queue.get()
+               while any(map(Thread.is_alive, workers)):
+                   try:
+                       (worker,c1,g1,rate1) = result_queue.get(True, 1)
+                   except:
+                       continue
                done_jobs[(c1,g1)] = rate1
                if (c1,g1) not in resumed_jobs:
                    best_c,best_g,best_rate = update_param(c1,g1,rate1,best_c,best_g,best_rate,worker,False)

make.m problem in win10 & MinGW64 compiler

I am on Windows 10 with Matlab R2015b and MinGW64. When I run make.m I encounter `gcc: error: \-fexceptions: No such file or directory'. I solved it by changing CFLAGS to COMPFLAGS.

Load and save from a memory buffer

I want to manage a model database without using temporary files.
For this, I propose an API extension:

int svm_save_model_buffer(const char *model_buffer, int buffer_length, const struct svm_model *model);
struct svm_model *svm_load_model_buffer(const char *model_buffer, int buffer_length);

svm_save_model_buffer saves a model to a buffer; returns the written size on success, or -1
if an error occurs.

svm_load_model_buffer returns a pointer to the model read from the buffer,
or a null pointer if the model could not be loaded.

Cache size > 2000 not recognised

Noticed this while using libSVM in sklearn - training an SVM with cache_size > 2000 or so on large problems does not seem to lead to any benefit/speed-up. Looking at RAM usage, it shows that usage is still about 200MB (which is roughly the original dataset size, rather than the kernel matrix size). The issue looks to be in svm.cpp, where the cache size is set to (long int) cache_size*(1<<20). I suspect this overflows when, for example, cache_size=4000.
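
(Checking the arithmetic, under the assumption that long int is 32 bits, as it is on Windows: 4000 * (1<<20) = 4,194,304,000, which exceeds the 32-bit LONG_MAX of 2,147,483,647, so the multiplication wraps around.)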

Testing done using Anaconda 2.4.1 on Windows 8.1, x64 processor.

Checking for users before running program

Hi again,

I am using the lab machines at my university, but I don't want to inconvenience others if they are sitting there. I had some problems implementing that with your grid.py, but I discovered that I could reimplement most of the functionality (that I needed) through gnu parallel instead.

#!/usr/bin/env bash

LOGFILE=$(mktemp "XXXX.parallel.log")

function unused {
    parallel --plain                                                \
             --sshloginfile ..                                      \
             --nonall                                               \
             --tag                                                  \
             '[[ -z $(users | sed "s/$USER//") ]] && echo "unused"' \
        | sed -e 's/\s*unused//'                                    \
              -e 's/^/4\//'
}

function exit_parallel {
    parallel --plain                                \
             --sshloginfile ..                      \
             --nonall                               \
             'killall -q -u $USER svm-train'
    rm "$LOGFILE"
}

trap 'echo "Ctrl-C detected.";                  \
          exit_parallel;                        \
          exit 130'                             \
     SIGINT SIGQUIT

parallel --plain                                    \
         --sshloginfile <(unused)                   \
         --filter-hosts                             \
         --joblog "$LOGFILE"                        \
         --resume-failed                            \
         --timeout 28800                            \
         --tag                                      \
         'nice svm-train -q                         \
                         -m 1024                    \
                         -h 0                       \
                         -v 5                       \
                         -c $(echo 2^{1} | bc -l)   \
                         -g $(echo 2^{2} | bc -l)   \
                 "'$DATA'"                          \
              | sed -e "s/Cross .* = //"            \
                    -e "s/%//"'                     \
         ::: {-5..15..2}                            \
         ::: {3..-15..-2}

This does make some assumptions based on my environment (like the home folder always being the same on every machine). You also need to configure the script by editing it directly. I just thought I would tell you, as you might be interested in it (or someone else following this repository might be).

pred_label does not correspond to prob_estimates in [pred_label,accuracy,prob_estimates]=svmpredict(test_label,test_data,model,'-b 1');

In prob_estimates, the position of the maximum value in a row does not correspond to pred_label.

prob_estimates=0.0877046072932294 0.00689885694870784 0.0510358500866629 0.0349193526856883 0.0201649925974930 0.0572772003038145 0.00354058458641571 0.434801642194089 0.299917382939861 0.00373953036403846
0.0569029578292815 0.0128889675010719 0.0226503273265042 0.235434067349005 0.0274432928060539 0.0223993134855364 0.00449010998588290 0.372552666098744 0.240135267815857 0.00510302980206350
0.0202618302729419 0.00609933422466536 0.00513096031248623 0.556599397170814 0.00770208398129837 0.0147579801297493 0.00117991662750405 0.123657362841668 0.263114176688193 0.00149695775068038
0.0138081408458132 0.0134109236889526 0.0261085782234978 0.379193740840643 0.0136879003688169 0.0278073190536115 0.00393368980397517 0.0770472704849993 0.441814426316247 0.00318801037344396
0.00737713942130312 0.00719604777712997 0.0190913916210304 0.766244252381808 0.00452925052833849 0.00832667922291313 0.000954117402067928 0.0242901169949015 0.160638180981833 0.00135282366867416
0.0404892166770354 0.0131597503809068 0.00812199404116744 0.0709609923235642 0.0256930503037581 0.0235279018153220 0.00310139828330843 0.190031347957460 0.620513527363689 0.00440082085378811
0.0233835236239888 0.00845191161599723 0.0204935753653564 0.0455692904676819 0.0733235759739852 0.0506628894366520 0.00506885299370543 0.0616635058140948 0.707636494571399 0.00374638013713891
0.0170375648555056 0.0117320006702165 0.0351611195497132 0.0329173319877270 0.0199963414010635 0.0233994742806957 0.00202927477235737 0.0367208285826699 0.809862937232914 0.0111431266671378
0.00249398437648049 0.00118775906267820 0.00420318445065694 0.00150617919781232 0.00851998127528923 0.0170565271724271 0.000216428383816490 0.00376026537283618 0.959633934361137 0.00142175634686587
0.0172952090357666 0.00203085205048888 0.0663402503175806 0.00337260286616463 0.0192039462603744 0.0368582111255392 0.00135917901052383 0.0574097352946263 0.753632624031509 0.0424973900074269
0.000537446336879699 2.15124069324038e-05 0.00561621351527447 0.00185702285684450 8.83922926914827e-05 0.000224499754837717 3.52901270154285e-05 7.98180192575826e-06 0.00140238264873741 0.990209258258861
0.00800564331385912 0.000723560253424673 0.00910529583487216 0.0517704592247967 0.00127130082212482 0.000836563684457387 0.0318391492343732 0.00144605301838154 0.00150543142470253 0.893496543189008
0.00597297494217007 0.00213978859351707 0.0254911330116816 0.0667699924629301 0.00264826602958697 0.00124512773832485 0.0281686476611874 0.00111821134747100 0.00185613966496867 0.864589718548163
0.737566350619235 0.00377211871885958 4.93077713563539e-05 0.00382722305898168 0.000840960416034537 7.15881072403622e-05 0.000763927380789079 0.000475734513815691 0.000236976483126986 0.252395812930561
0.485882559427058 0.179818235887341 0.00491168109845924 0.0735388345615359 0.0108076830998238 0.00347030734890046 0.00671528385336261 0.0153990089988476 0.00948964326287568 0.209966762461795

pred_label=6
6
4
9
4
9
9
9
9
9
8
8
8
7
7

Octave 4.0 and parallelised LibSVM don't work together

I'm using Octave 4.0.0 on Kubuntu 15.10 (yes, the beta) on a 64-bit machine. I applied the updated make rules that were mentioned in April. I can compile without error messages using make.m. However, I cannot run it.

N = 10000;
L = randi(2,1,N);
D = [randn(1,N/2) randn(1,N/2)+1];
model = svmtrain(L',D');
error: /opt/libsvm/octave/svmtrain.mex: failed to load: /opt/libsvm/octave/svmtrain.mex: undefined symbol: GOMP_loop_guided_start

How can this error be resolved?

MEX File crash for regression SVM in MATLAB

The MEX interface to regression SVMs appears to be crashing in MATLAB - we use the classification SVMs widely, but are getting seg-faults with option: -s 3

See the attached xy.csv, then run:

clearvars;
close all;

xy = dlmread('xy.csv');
x = xy(:,1);
y = xy(:,2);

model = svmtrain(y, x, '-s 3');
out = svmpredict(y, x, model);

This causes a SEG FAULT in Windows & OSX.
xy.zip

Wrong information in Octave when training C-SVC and NU-SVC

When I train a nu-SVC in Octave with the command

model = svmtrain(ytrain, Xtrain_norm, '-s 1 -t 2');

I get this output

*
optimization finished, #iter = 570
C = 0.085946
obj = 17.382239, rho = -0.596579
nSV = 859, nBSV = 808
Total nSV = 859

At the beginning I was puzzled by that "C = 0.085946", which led me to think that a C-SVM was being trained instead, and that there was an error in the library...

Also, if I use the "-s 0" argument (which means C-SVC), it outputs:

model = svmtrain(ytrain, Xtrain_norm, '-s 0 -t 2');
*
optimization finished, #iter = 595
nu = 0.227101
obj = -279.128990, rho = -0.810343
nSV = 430, nBSV = 328
Total nSV = 430

So I was thinking that the two arguments were swapped.

I went a little bit further and tried running the svm-train binary with the same arguments:

svm-train -s 1 -t 2 trainingset_libsvm.dat model_libsvm_NU.dat

Excact same output as above but inside the created file I found:

svm_type nu_svc
kernel_type rbf
gamma 0.0833333
nr_class 2
total_sv 859
rho -0.596582
label 0 1
nr_sv 428 431
SV

So is it just the printed information that is wrong? Or is it correct and I'm not understanding something?

Thanks

bug - easy.py throws ValueError

easy.py throws an error, even on standard datasets (e.g., iris):

Traceback (most recent call last):
  File "easy.py", line 63, in <module>
    c,g,rate = map(float,last_line.split())
ValueError: need more than 0 values to unpack

This was observed in Windows but has been reported for other OS's elsewhere.

Zeroed weights for entire class

I know that it's a weird usage of class weights, but still, could it be explained somehow? Or fixed?

dataset.txt:

0 1:0 2:0 3:0
0 1:0 2:0 3:1
0 1:0 2:1 3:0
1 1:0 2:1 3:1
1 1:1 2:0 3:0
1 1:1 2:0 3:1
2 1:1 2:1 3:0
2 1:1 2:1 3:1

code:

libsvm-3.20$ ./svm-train -b 1 -w0 1 -w1 1 -w2 0 dataset.txt model
libsvm-3.20$ ./svm-predict -b 1 dataset.txt model predictions.out

It produces in predictions.out:

labels 0 1 2
2 3.31221e-14 3.30357e-14 1
2 3.63995e-14 3.24543e-14 1
2 3.36039e-14 3.30595e-14 1
2 3.77311e-14 3.12876e-14 1
2 3.86737e-14 2.78238e-14 1
2 3.82377e-14 2.50579e-14 1
2 3.84825e-14 2.96375e-14 1
2 3.84239e-14 2.58019e-14 1

How to get the alpha_i * y_i in Libsvm 3.22 ?

As my title says, I want to get alpha_i * y_i in Libsvm 3.22 but don't know how to do it.
sv_coef is now a double[][] array, and I can't get a_i * y_i just by using model.sv_coef[i] like most past answers suggest.
I asked the same question at http://stackoverflow.com/q/43348979/3097907; there is some more information there.
I hope someone can help me with this question. Thank you.

PS: my original problem is solving this formula (from SimpleMKL, formula 11):

gradient(J) = -0.5 * sum_{i,j} alpha*_i * alpha*_j * y_i * y_j * K_m(x_i, x_j)

MathCad

Can someone tell me how to translate the libsvm source code into MathCAD?

potential bug - the Matlab interface

I tried to run a very simple binary classification via the matlab interface of libsvm
where

class A : [ 1, 1]
class B : [-1,-1] and [ 1, -1 ]

but I got wrong prediction results (compared to the python interface);
the cases [-1,-1] and [1,-1] are both wrong.

here is the sample code

N=500;
A_pts = repmat([1,1],N*2,1);
A_label = ones(size(A_pts,1),1);

B_pts = repmat([-1,-1],N,1);
B_pts = cat(1,B_pts, repmat([1,-1],N,1));    
B_label = -1*ones(size(B_pts,1),1);


x = [ A_pts ; B_pts ];
y = [ A_label ; B_label ];


svmmodel = svmtrain(x,y);
svmpredict(1,[1,1],svmmodel)
svmpredict(-1,[-1,-1],svmmodel) % wrong 
svmpredict(-1,[1,-1],svmmodel) % wrong 

output:

optimization finished, #iter = 500
nu = 0.500000
obj = -1000.000000, rho = -1.000000
nSV = 1000, nBSV = 1000
Total nSV = 1000

Use libsvm in hadoop

Hello, everyone.
I want to ask something: can I use libsvm in Apache Hadoop?
Does it work with the MapReduce programming model in Hadoop?

32-bit and 64-bit DLLs?

I was working on getting the Julia language binding to LIBSVM working on Windows, and was wondering if you could add both a 32-bit and a 64-bit version of libsvm.dll to your makefile and repository? I think the current file is 32-bit only.

A bug in svm.java in sigmoid_train method

If anybody wants to use the probability outputs of the SVM, it goes wrong and the output label is always the same. The problem is at line 1672:

if (iter>=max_iter)
    //svm.info("Reaching maximal iterations in two-class probability estimates\n");

probAB[0]=A;probAB[1]=B;

As you can see, because the body of the if statement is commented out, the next line after the commented part becomes the body of the if and will only be executed when the condition is true. Thus, the results are always wrong.

Solution:
Simply comment out the whole if statement.
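
For example, one equivalent fix (a sketch) is to keep the if statement but restore its body, so that the assignments below it are no longer swallowed:

if (iter >= max_iter)
	svm.info("Reaching maximal iterations in two-class probability estimates\n");

probAB[0] = A; probAB[1] = B;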

Regards,
Mahmood

Calling multiclass_probability when mapping of decision values of binary classifier to probabilities

Hi,

I've got a problem with the mapping of decision values to probabilities for a binary classifier (nr_class = 2).

In that case, in the function svm_predict_probability at line 2615 in svm.cpp, multiclass_probability will be called, which implements the method from this paper, and the resulting predicted probabilities (let's call them probs1) will not be the same as the ones obtained by just pulling the decision values through a sigmoid, which is what one gets by calling sigmoid_predict on the decision values (let's call these probs2). Both probs1 and probs2 are probability estimates, but they are not the same, and probs2 was directly calibrated to output probabilities, so it makes more sense to output probs2 as the probability estimates for a binary classifier.

Is there any reason to call multiclass_probability even when the classifier is binary (nr_class = 2)?

Thanks!

Y must be a vector or a character array

Hi, I used your example code in Matlab as below:

[heart_scale_label, heart_scale_inst] = libsvmread('heart_scale');
% Split Data
train_data = heart_scale_inst(1:150,:);
train_label = heart_scale_label(1:150,:);
test_data = heart_scale_inst(151:270,:);
test_label = heart_scale_label(151:270,:);

% Linear Kernel
model_linear = svmtrain(train_label, train_data, '-t 0');
[predict_label_L, accuracy_L, dec_values_L] = svmpredict(test_label, test_data, model_linear);

% Precomputed Kernel
model_precomputed = svmtrain(train_label, [(1:150)', train_data*train_data'], '-t 4');
[predict_label_P, accuracy_P, dec_values_P] = svmpredict(test_label, [(1:120)', test_data*train_data'], model_precomputed);

accuracy_L % Display the accuracy using linear kernel
accuracy_P % Display the accuracy using precomputed kernel

but on the svmtrain lines it says:

Error using svmtrain (line 234)
Y must be a vector or a character array.

Can you help me?

make lib failed

OS: OS X Sierra
GNU Make 3.81

this is output message:

kent:libsvm-3.21 kent$ make lib
if [ "Darwin" = "Darwin" ]; then \
        SHARED_LIB_FLAG="-dynamiclib -Wl,-install_name,libsvm.so.2"; \
    else \
        SHARED_LIB_FLAG="-shared -Wl,-soname,libsvm.so.2"; \
    fi; \
    c++ ${SHARED_LIB_FLAG} svm.o -o libsvm.so.2

As you can see, SHARED_LIB_FLAG is not recognized.

Suggested fix:

lib: svm.o
    @if [ "$(OS)" = "Darwin" ]; then \
        SHARED_LIB_FLAG="-dynamiclib -Wl,-install_name,libsvm.so.$(SHVER)"; \
    else \
        SHARED_LIB_FLAG="-shared -Wl,-soname,libsvm.so.$(SHVER)"; \
    fi &&\
    $(CXX) $${SHARED_LIB_FLAG} svm.o -o libsvm.so.$(SHVER)

OpenMP

Hi,
it would be great to have multiprocessing support like in the C++ libsvm (see libsvm FAQ 'How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?'). I tried to overwrite the libsvm.dll in the LIBSVM.NET package with one compiled in C++ with OpenMP, but after a few seconds the application crashed.

Using LIBSVM with OpenMP under Octave

Hello,
I don't have much development experience on Linux, so forgive me if I ask something obvious.

  1. I added -fopenmp to CFLAGS and -lgomp to MEX_OPTION in /matlab/Makefile
  2. I added -fopenmp to CFLAGS in /Makefile
  3. In svm.cpp I added:
    3.1 "#pragma omp parallel for private(j)" above the line "for(j=start;j<len;j++)" (line 1285)
    3.2 "#pragma omp parallel for private(i) reduction(+:sum)" above the line "for(i=0;i<l;i++)" (line 2511)
    3.3 "#pragma omp parallel for private(i)" above the line "for(i=0;i<l;i++)" (line 2528)
  4. I ran the make command under Octave

Then, when I use svmtrain under Octave, I get the error: /home/Octave/libsvm_mp/matlab/svmtrain.mex: failed to load: /home/Octave/libsvm_mp/matlab/svmtrain.mex: undefined symbol: omp_get_thread_num

What am I doing wrong here?

(Ubuntu 14.04, Octave 3.8.1, gcc 4.9, libsvm 3.20)

Unknown parameter input ,what is happening?

I use libsvm on CentOS.
I scale, train, and build a model file.
But my input contains some unknown parameter values, for example:
1 1:0.01 2:0.32 3:-0.12 4:. 5:0.023 6:. 7:. 8.-0.02

Result of scaling it:
1 1:0.023421 2:0.43 4:0.564 6:1.23 7:0.023

Result of predicting it:
-1 0.0238351 0.976165

Some parameters are missing and some parameters are added during scaling.
What is happening?
Can the data be trusted?

LibSVM Mex File Error

Hi Everyone,

I want to call the libsvm functions via a MEX file from the windows folder. I have added the MEX file path, but the function still gets an error when I call it. The error is "Invalid MEX-file. The specified procedure could not be found.". I use Matlab 2013a 64-bit and the MEX file is also compiled in 64-bit.

Is there any procedure that I missed?

Thanks anyway.

Get size or dimension of data from model

Hi,
There are svm_get_nr_sv() to get the number of support vectors and svm_get_nr_class() to get the number of classes, but there seems to be no function to get the data dimension (number of columns) of the model.
Having this function would be helpful when one loads the model in a wrapper and checks the external input data dimensions before every call to predict().
Is there a way to get this information easily?
Thanks.

Minor suggestion for the README file

I would kindly suggest adding that running "make" on a Unix system builds three programs. Apparently, svm-scale is missing from the list in the README. This is a very small change, but it seems imho consistent with the style of the README file.

Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

I am using https://github.com/ccerhan/LibSVMsharp as a wrapper.

When I call LibSVM inside a task, this error happens after the second task has started.

If I call libsvm inside the main thread, the error never happens.

If I only start 1 task, the error does not happen.

As can be seen in the first image, the error is not related to my application or my functions. It is caused by either the wrapper or libSVM.dll itself; I don't know which one.

I am using Windows 8.1, x64, Visual Studio 2013, a WPF .NET 4.5.1 application, and 32 GB of RAM on this computer.

[screenshots omitted: first, the error message when called; second, error message type 2; third, what works and what causes the errors]

I really need help. Thank you very much.

svm.cpp: svm_predict() forcing one class

I am using libsvm 3.20. I have a dataset that causes svm_predict() and svm_predict_probability() to give different results. In particular, svm_predict() classifies everything into one class, which is definitely wrong for this dataset.

You can trigger it with the command-line tools as follows:

wget https://github.com/kousu/statasvm/raw/master/bugs/libsvm_classification/classification_bug.svmlight

svm-train -b 1 classification_bug.svmlight FIT >/dev/null

# svm_predict(), incorrect
svm-predict -b 0 classification_bug.svmlight FIT P
cat P

# svm_predict_probability(), correct (or at least reasonable)
svm-predict -b 1 classification_bug.svmlight FIT P
cat P

Tabulating the values (count per label), I see:

# training data (label, count)
0    61
1    91
2     9
3     9

# svm_predict(), incorrect (count, label)
Model supports probability estimates, but disabled in prediction.
Accuracy = 53.5294% (91/170) (classification)
    170 1

# svm_predict_probability() (count, label)
Accuracy = 84.7059% (144/170) (classification)
labels 0 1 2 3      <- header line of the probability output file
     61 0
    100 1
      9 2
The class that is incorrectly chosen is the one that is dominant in the training data, which seems telling, but I don't know enough about the mathematics of SVMs to know what it is telling me.

The reference dataset and full test cases are at https://github.com/kousu/statasvm/tree/master/bugs/libsvm_classification. This showed up when run from my Stata wrapper in that repo, but it also appears in sklearn and in your command-line tools.

I hit the svm_predict() bug a week ago, but I was even more surprised to see that, despite it, you can still get good answers out of libsvm by tweaking parameters. Given the huge number of machine-learning projects that depend on your code, there must be a lot of subtly incorrect predictions that no one is catching. Do you have any idea what would cause this?

ArrayIndexOutOfBoundsException in java v3.2

I have encountered an ArrayIndexOutOfBoundsException in version 3.2 that does not exist in v2.8.
Here is the stack trace:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
at libsvm.Cache.get_data(svm.java:63)
at libsvm.ONE_CLASS_Q.get_Q(svm.java:1208)
at libsvm.Solver.Solve(svm.java:496)
at libsvm.svm.solve_one_class(svm.java:1422)
at libsvm.svm.svm_train_one(svm.java:1516)
at libsvm.svm.svm_train(svm.java:1959)
at LibSvmBug.svmTrain(LibSvmBug.java:95)
at LibSvmBug.train(LibSvmBug.java:59)
at LibSvmBug.main(LibSvmBug.java:38)

In v2.8 the output is the following:
*
optimization finished, #iter = 0
obj = NaN, rho = Infinity
nSV = 246, nBSV = 245

I made a sample class. Run bug.sh and working.sh, which can be found here:
https://www.dropbox.com/s/gnx9a9n1293spz4/LibSvmBug.tar.gz?dl=0

With version 2.8 you will get no exception, but with version 3.2 you will get an ArrayIndexOutOfBoundsException.
(Please do not mind the strange parameter choices.)

I also tested some other versions. The bug also occurs in v2.81, 2.88, 2.91, and 3.00.

Edit: Running java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
(Same exception on Oracle VM on Ubuntu)

I just verified this on Windows 8 using JRE 7, starting from Eclipse.

AttributeError: /usr/lib/libsvm.so.3: undefined symbol: svm_get_sv_indices

Hi,

I got the error in the subject line and found that no downloadable Ubuntu package fixed it. There was no libsvm.so.2 on my computer, yet that was the only file that fixed the issue, even though libsvm.so.3 is the one named in the error.

Is any of this indicative of a bug?

Multiple people are encountering this issue. More background and details, including my answer with the solution that worked for me, can be found at the URL below:

http://stackoverflow.com/questions/42050356/error-in-importing-sidekit-in-python-on-ubuntu/

Andrew
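
For anyone debugging the same thing: the error just means the shared object that Python loaded does not export svm_get_sv_indices, i.e., it is an older libsvm build. A quick way to check what a given .so actually exports (a sketch):

    $ nm -D /usr/lib/libsvm.so.3 | grep svm_get

A library built from a recent libsvm source tree (make lib) should list svm_get_sv_indices among the results.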

Check parameter failure on a non-applicable case

Using the following parameters:

    svm_parameter param;
    param.C = 100;
    param.svm_type = C_SVC;
    param.kernel_type = LINEAR;
    param.eps = 0.00001;
    param.probability = 0;
    param.shrinking = 0;
    param.cache_size = 100;

I ran svm_check_parameter to validate them and it returned "degree of polynomial kernel < 0".
Since only the POLY kernel employs degree, the parameter's if-condition should also check the kernel type.
A similar issue applies to the gamma check just above it.

'svm_check_parameter' problematic lines

The error can obviously be avoided by setting these parameters to zero so that they pass the conditions, but we shouldn't rely simply on this default value.
Adding a kernel/SVM-type check where these parameters are employed could avoid a few head-scratches for future users of libsvm.

As proof, it just happened to me that gamma < 0 didn't raise any error while degree < 0 did.
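
Something along these lines in svm_check_parameter() would do it (a sketch of the suggested change, not an official patch; the kernel-type constants are those declared in svm.h):

    /* Only validate kernel-specific parameters for kernels that use them. */
    if (param->kernel_type == POLY && param->degree < 0)
        return "degree of polynomial kernel < 0";

    if ((param->kernel_type == POLY ||
         param->kernel_type == RBF ||
         param->kernel_type == SIGMOID) && param->gamma < 0)
        return "gamma < 0";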

Console output for predictions

It would be useful, for performance reasons, if applications using svm-predict to classify single documents could pass the document to classify as a command-line argument (instead of a file name) and have the predicted class printed directly to standard output (instead of written to a file). This would spare lots of useless I/O operations.

What about adding a new usage:

svm-predict [options] test_document model_file

with console output, for instance:

$ svm-predict '1:-0.14 2:0.2666667 3:0.1074111' model.svm
-1
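
Until something like this exists, a similar effect can be had on Linux with the special /dev/ files (a sketch; note svm-predict still expects a label column, so a dummy label such as 0 is needed, and the accuracy line is printed alongside the prediction unless a quiet option is available):

    $ echo '0 1:-0.14 2:0.2666667 3:0.1074111' | svm-predict /dev/stdin model.svm /dev/stdout

This still goes through the file-reading code path, but it avoids creating temporary files on disk.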

Add Javascript binding

Hello,

As JavaScript has become the glue that holds everything together, have you ever thought of adding a JS binding for your wonderful lib?
