package owl

module MX = Owl_dense_matrix.D
module UT = Owl_utils
val l1 : MX.mat -> MX.mat

L1 regularisation and its subgradient

val l1_grad : MX.mat -> MX.mat
val l2 : (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t

L2 regularisation and its gradient

val l2_grad : 'a -> 'a
val elastic : float -> MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t

Elastic net regularisation and its gradient. The float argument a is the weight on the l1 regularisation term; see the usage sketch after elastic_grad below.

val elastic_grad : float -> MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t
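
Because elastic and elastic_grad take the l1 weight as their first argument, they can be partially applied to obtain functions of the shape expected by sgd's ?r and ?o arguments further down. A minimal sketch, assuming this module is open; the weight 0.5 is an arbitrary illustrative choice:

  let my_reg = elastic 0.5           (* elastic net penalty with l1 weight 0.5 *)
  let my_reg_grad = elastic_grad 0.5 (* its gradient, same l1 weight *)
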
val noreg : MX.mat -> MX.mat

No regularisation and its gradient

val noreg_grad : MX.mat -> MX.mat
val square_loss : (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> MX.mat

least squares loss function

val square_grad : MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t
val hinge_loss : (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> MX.mat

hinge loss function

val hinge_grad : MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t
val hinge2_loss : (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t

squared hinge loss function

val hinge2_grad : 'a -> 'b -> 'c -> 'd option
val softmax_loss : 'a -> 'b -> 'c option

softmax loss function

val softmax_grad : 'a -> 'b -> 'c -> 'd option

val log_loss : (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> MX.mat

logistic loss function

val log_grad : MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t
val constant_rate : 'a -> 'b -> 'c -> float
val optimal_rate : float -> float -> int -> float
val decr_rate : float -> 'a -> int -> float
val when_stable : float -> 'a -> bool
val when_enough : float -> int -> bool
val _sgd_basic : int -> (float -> float -> int -> float) -> (float -> int -> bool) -> (MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> (MX.mat -> MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> (MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> (MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> float -> bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Stochastic Gradient Descent (SGD) algorithm.

b : batch size
s : step size
t : stop criteria
l : loss function
g : gradient function of the loss function
r : regularisation function
o : gradient function of the regularisation function
a : weight on the regularisation term; a common setting is 0.0001
i : whether to include the intercept or not; the default value is false
p : model parameters (k x m); each column is a classifier, so there are m classifiers of k features each
x : data matrix (n x k); each row is a data point, so there are n data points of k features each
y : labelled data (n x m); n data points, each labelled against the m classifiers

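The s (step size) and t (stop criteria) arguments are plain functions; the built-ins constant_rate, optimal_rate, decr_rate, when_stable and when_enough listed earlier have these shapes. A minimal sketch of custom ones (illustrative only, not part of the library):

  (* step size of the shape expected by s : float -> float -> int -> float,
     here decaying with the iteration count *)
  let my_rate (_ : float) (_ : float) i = 1. /. (1. +. float_of_int i)

  (* stop criteria of the shape expected by t : float -> int -> bool,
     here stopping after a fixed number of iterations *)
  let my_stop (_ : float) i = i >= 10_000
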
val sgd : ?b:int -> ?s:(float -> float -> int -> float) -> ?t:(float -> int -> bool) -> ?l: (MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> ?g: (MX.mat -> MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> ?r:(MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> ?o:(MX.mat -> (float, Bigarray.float64_elt) Owl_dense_matrix_generic.t) -> ?a:float -> ?i:bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Wrapper of the _sgd_basic function; see the usage sketch below.

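A minimal usage sketch of the wrapper, assuming this module is open, that Owl_dense_matrix.D provides the usual uniform and zeros constructors, and that the trailing matrix arguments follow the p, x, y order documented for _sgd_basic above; sizes and hyper-parameters are arbitrary illustrative choices:

  let x  = MX.uniform 1000 10   (* 1000 data points, 10 features *)
  let y  = MX.uniform 1000 1    (* one target column *)
  let p0 = MX.zeros 10 1        (* initial parameters, one classifier column *)

  let p =
    sgd ~b:16 ~l:square_loss ~g:square_grad
        ~r:l2 ~o:l2_grad ~a:0.0001 p0 x y
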
val gradient_descent : 'a option
val svm_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat -> MX.mat

Support Vector Machine regression. i : whether to include the intercept bias in the parameters. Note that the values in y are either +1 or -1.

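A sketch under the same assumptions as the sgd example above, with the three matrix arguments presumed to be initial parameters, data and labels; the labels are mapped into {+1, -1} as the note requires:

  let x  = MX.uniform 200 5
  let y  = MX.map (fun v -> if v > 0.5 then 1. else -1.) (MX.uniform 200 1)
  let p0 = MX.zeros 5 1
  let p  = svm_regression ~i:false p0 x y
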
val ols_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat

Ordinary Least Squares regression. i : whether to include the intercept bias in the parameters.

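A sketch of a plain least-squares fit, again assuming the module is open; x and y are random stand-ins for real data:

  let x = MX.uniform 200 5             (* 200 samples, 5 features *)
  let y = MX.uniform 200 1             (* one target column *)
  let w = ols_regression ~i:true x y   (* fitted weights, with the intercept included *)
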
val ridge_regression : ?i:bool -> ?a:float -> MX.mat -> MX.mat -> MX.mat

Ridge regression. i : whether to include the intercept bias in the parameters. a : weight on the regularisation term. TODO: how to choose a automatically.

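The call mirrors ols_regression, with a controlling the strength of the penalty; the value below is purely illustrative (the TODO above notes that choosing it automatically is still open), and x, y are as in the ols_regression sketch:

  let w = ridge_regression ~i:true ~a:0.001 x y
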
val lasso_regression : ?i:bool -> ?a:float -> MX.mat -> MX.mat -> MX.mat

Lasso regression. i : whether to include the intercept bias in the parameters. a : weight on the regularisation term. TODO: how to choose a automatically.

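Same call shape as ridge_regression but with the l1 penalty, which tends to push some weights to exactly zero; a is again an illustrative value and x, y are as in the ols_regression sketch:

  let w = lasso_regression ~i:true ~a:0.001 x y
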
val logistic_regression : ?i:bool -> MX.mat -> MX.mat -> MX.mat

Logistic regression. i : whether to include the intercept bias in the parameters. Note that the values in y are either +1 or 0.

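A sketch with labels encoded as {0, +1} as the note requires; the data are random stand-ins and the module is assumed to be open:

  let x = MX.uniform 200 5
  let y = MX.map (fun v -> if v > 0.5 then 1. else 0.) (MX.uniform 200 1)
  let w = logistic_regression ~i:true x y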