package owl

Machine learning library. Note: matrices use the C layout (row-major).

module MX = Owl_dense_real
module UT = Owl_utils
val kmeans : MX.mat -> int -> MX.mat * int array

K-means clustering algorithm. x contains the data points, one per row, and c is the number of clusters.
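
A minimal usage sketch; uniform, row_num and col_num are assumed MX helpers, so check the names against Owl_dense_real:

  let () =
    let x = MX.uniform 100 2 in                  (* 100 points, 2 features *)
    let centroids, assignment = kmeans x 3 in    (* 3 clusters *)
    Printf.printf "%i x %i centroids, point 0 in cluster %i\n"
      (MX.row_num centroids) (MX.col_num centroids) assignment.(0)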

val numerical_gradient : ('a -> MX.mat -> MX.mat) -> MX.mat -> MX.mat -> 'a -> 'b -> MX.mat

A numerical way of calculating the gradient. x is a k x m matrix containing m classifiers of k features.
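
The underlying idea is a central finite difference, applied entry-wise; a scalar sketch (the choice of eps is illustrative):

  (* df/dx ~ (f (x + eps) - f (x - eps)) / (2 eps) *)
  let numerical_derivative f x =
    let eps = 1e-6 in
    (f (x +. eps) -. f (x -. eps)) /. (2. *. eps)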

val l1 : MX.mat -> MX.mat

L1 regularisation and its subgradient
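
On plain float arrays the pair looks as follows; the library versions operate on MX.mat, so this is only an illustration of the maths:

  let l1 p = Array.fold_left (fun acc v -> acc +. abs_float v) 0. p
  (* subgradient: the sign of each entry, taken as 0 at 0 *)
  let l1_grad p =
    Array.map (fun v -> if v > 0. then 1. else if v < 0. then -1. else 0.) p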

val l1_grad : MX.mat -> MX.mat
val l2 : MX.mat -> MX.mat

L2 regularisation and its gradient
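
The L2 counterpart on float arrays; the 0.5 scaling, which makes the gradient exactly p, is an assumption to check against the source:

  let l2 p = 0.5 *. Array.fold_left (fun acc v -> acc +. v *. v) 0. p
  let l2_grad p = p    (* consistent with the 'a -> 'a signature above *)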

val l2_grad : 'a -> 'a
val elastic : float -> MX.mat -> MX.mat

Elastic net regularisation and its gradient. a is the weight on the L1 regularisation term.
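
Elastic net combines the two, here reusing the float-array l1, l2 and l1_grad sketches above; whether the library uses exactly this convex combination is an assumption:

  let elastic a p = a *. l1 p +. (1. -. a) *. l2 p
  let elastic_grad a p =
    Array.mapi (fun j s -> a *. s +. (1. -. a) *. p.(j)) (l1_grad p)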

val elastic_grad : float -> MX.mat -> MX.mat
val noreg : MX.mat -> MX.mat

No regularisation and its gradient

val noreg_grad : MX.mat -> MX.mat
val square_loss : MX.mat -> MX.mat -> MX.mat

least squares loss function
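
On scalars, with y the target and y' the prediction; the 0.5 factor, which makes the gradient exactly the residual, is a common convention assumed here:

  let square_loss y y' = 0.5 *. (y' -. y) ** 2.
  let square_grad y y' = y' -. y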

val square_grad : MX.mat -> MX.mat -> MX.mat -> MX.mat
val hinge_loss : MX.mat -> MX.mat -> MX.mat

hinge loss function
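
On scalars, with y the true +1/-1 label and y' the raw prediction:

  let hinge_loss y y' = max 0. (1. -. y *. y')
  (* subgradient w.r.t. y': -y when the margin is violated, else 0 *)
  let hinge_grad y y' = if y *. y' < 1. then -. y else 0.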

val hinge_grad : MX.mat -> MX.mat -> MX.mat -> MX.mat
val hinge2_loss : MX.mat -> MX.mat -> MX.mat

squared hinge loss function
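
The squared variant, which is differentiable at the margin:

  let hinge2_loss y y' = (max 0. (1. -. y *. y')) ** 2.
  let hinge2_grad y y' =
    if y *. y' < 1. then -2. *. y *. (1. -. y *. y') else 0.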

val hinge2_grad : 'a -> 'b -> 'c -> 'd option
val softmax_loss : 'a -> 'b -> 'c option

softmax loss function
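
The option-typed signatures suggest softmax is not implemented yet. For reference, the softmax function itself on a float array, with the maximum subtracted for numerical stability:

  let softmax z =
    let m = Array.fold_left max neg_infinity z in
    let e = Array.map (fun v -> exp (v -. m)) z in
    let s = Array.fold_left (+.) 0. e in
    Array.map (fun v -> v /. s) e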

val softmax_grad : 'a -> 'b -> 'c -> 'd option
val log_loss : MX.mat -> MX.mat -> MX.mat

logistic loss function

val log_grad : MX.mat -> MX.mat -> MX.mat -> MX.mat
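
On scalars, in the +1/-1 margin form; whether the library uses this or the 0/1 cross-entropy form is an assumption worth checking against the source:

  let log_loss y y' = log (1. +. exp (-. y *. y'))
  let log_grad y y' = -. y /. (1. +. exp (y *. y'))
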
val constant_rate : 'a -> 'b -> 'c -> float
val optimal_rate : float -> float -> int -> float
val decr_rate : float -> 'a -> int -> float
val when_stable : float -> 'a -> bool
val when_enough : float -> int -> bool
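
The five values above supply step-size schedules and stopping criteria for the SGD functions below. They carry no documentation, so the following shapes are guesses from the names and signatures only:

  (* fixed step size; extra arguments ignored, matching 'a -> 'b -> 'c -> float *)
  let constant_rate _ _ _ = 0.001
  (* step size that decays with the iteration count i *)
  let decr_rate a _ i = a /. (1. +. float_of_int i)
  (* stop once the change in loss falls below a threshold *)
  let when_stable delta _ = delta < 1e-6
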
val _sgd_basic : int -> (float -> float -> int -> float) -> (float -> int -> bool) -> (MX.mat -> MX.mat -> MX.mat) -> (MX.mat -> MX.mat -> MX.mat -> MX.mat) -> (MX.mat -> MX.mat) -> (MX.mat -> MX.mat) -> float -> bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Stochastic Gradient Descent (SGD) algorithm.

  b : batch size
  s : step size
  t : stop criteria
  l : loss function
  g : gradient function of the loss function
  r : regularisation function
  o : gradient function of the regularisation function
  a : weight on the regularisation term; a common setting is 0.0001
  i : whether to include the intercept; the default is false
  p : model parameters (k x m); each column is a classifier, so there are m classifiers of k features
  x : data matrix (n x k); each row is a data point, so there are n data points of k features each
  y : labeled data (n x m); n data points, each labeled with m classifiers
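
To make the parameter roles concrete, a stripped-down SGD loop on float arrays (full batch, fixed iteration count; the real function also handles batching, the stop criterion t and the intercept i):

  let sgd_sketch ~s ~g ~o ~a ~iters p x y =
    let p = Array.copy p in
    for i = 0 to iters - 1 do
      let dl = g p x y in    (* gradient of the loss at p *)
      let dr = o p in        (* gradient of the regulariser *)
      let eta = s i in       (* step size for iteration i *)
      Array.iteri (fun j _ -> p.(j) <- p.(j) -. eta *. (dl.(j) +. a *. dr.(j))) p
    done;
    p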

val sgd : ?b:int -> ?s:(float -> float -> int -> float) -> ?t:(float -> int -> bool) -> ?l:(MX.mat -> MX.mat -> MX.mat) -> ?g:(MX.mat -> MX.mat -> MX.mat -> MX.mat) -> ?r:(MX.mat -> MX.mat) -> ?o:(MX.mat -> MX.mat) -> ?a:float -> ?i:bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Wrapper of the _sgd_basic function.
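
A hypothetical call, assuming the three matrix arguments follow the p, x, y order described for _sgd_basic and that the defaults select a sensible loss and regulariser (both assumptions):

  (* p0, x and y stand for an initial parameter matrix and the data *)
  let p = sgd ~i:true ~a:0.0001 p0 x y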

val gradient_descent : 'a option
val svm_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Support Vector Machine regression. i : whether to include the intercept bias in the parameters. Note that the values in y are either +1 or -1.

val ols_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat

Ordinary Least Squares regression. i : whether to include the intercept bias in the parameters.
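
For reference, OLS has the closed-form normal-equation solution p = (x^T x)^(-1) x^T y. A sketch; transpose, dot and inv are assumed MX names:

  let ols x y =
    let xt = MX.transpose x in
    MX.dot (MX.inv (MX.dot xt x)) (MX.dot xt y)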

val ridge_regression : ?i:bool -> ?a:float -> MX.mat -> MX.mat -> MX.mat

Ridge regression. i : whether to include the intercept bias in the parameters. a : weight on the regularisation term. TODO: how to choose a automatically.
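
Ridge likewise has a closed form, p = (x^T x + a I)^(-1) x^T y, which shows how a shrinks the solution. A sketch; eye, add, mul_scalar and col_num are assumed MX names:

  let ridge a x y =
    let xt = MX.transpose x in
    let k = MX.col_num x in
    MX.dot (MX.inv (MX.add (MX.dot xt x) (MX.mul_scalar (MX.eye k) a))) (MX.dot xt y)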

val lasso_regression : ?i:bool -> ?a:float -> MX.mat -> MX.mat -> MX.mat

Lasso regression. i : whether to include the intercept bias in the parameters. a : weight on the regularisation term. TODO: how to choose a automatically.
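
Lasso has no closed form; presumably it is solved with SGD and the L1 regulariser, roughly as below (a sketch using the module's own l1 and l1_grad; the matrix argument order is assumed to be p, x, y):

  let lasso ?(a = 0.001) p0 x y = sgd ~r:l1 ~o:l1_grad ~a p0 x y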

val logistic_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat

Logistic regression. i : whether to include the intercept bias in the parameters. Note that the values in y are either +1 or 0.