package neural_nets_lib

Computational primitives for neural networks, integrating Tensor with Assignments.

module Asgns = Arrayjit.Assignments
module Idx = Arrayjit.Indexing
module At : sig ... end
module Initial_NTDSL : sig ... end
module Initial_TDSL : sig ... end
val add : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val sub : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val mul : Shape.compose_type -> op_asn:(v:Tensor.tn -> t1:Tensor.t -> t2:Tensor.t -> projections:Tensor.projections Base.Lazy.t -> Tensor.asgns) -> label:Base.string Base.list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val pointmul : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val matmul : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val einsum : ?label:Base.string list -> Base.string -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t

The binary variant, similar to the explicit mode of numpy.einsum. It can compute various forms of matrix multiplication, inner and outer products, etc.

Note that "a,b->c" from numpy is "a;b=>c" in OCANNL, since "->" is used to separate the input and the output axes.

val outer_sum : ?label:Base.string list -> Base.string -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t

Like einsum, but adds the resulting values instead of multiplying them.
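
For instance, an "outer sum" c with c[i][j] = a[i] + b[j] can be written with the same spec syntax; the spec string below is illustrative, and the operands are built with range from this module.

    let a = range 3
    let b = range 4

    (* Values are added across the broadcast axes, not multiplied. *)
    let c = outer_sum ~label:["outer_sum"] "i;j=>ij" a b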

val einsum1 : ?label:Base.string list -> Base.string -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t

The unary variant, similar to the explicit mode of numpy.einsum. It can permute axes, extract diagonals, compute traces, etc.

Note that "a->c" from numpy is "a=>c" in OCANNL, since "->" is used to separate the input and the output axes.

val relu : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t
module NDO_without_pow : sig ... end
val pointpow : ?label:Base.string Base.list -> grad_spec:Tensor.grad_spec -> Base.float -> Tensor.t -> Tensor.t
module NDO_without_div : sig ... end
val pointdiv : ?label:Base.string Base.list -> grad_spec:Tensor.grad_spec -> Tensor.t -> Tensor.t -> Tensor.t
val range : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> ?axis_label:Base.string -> Base.Int.t -> Tensor.t
val range_of_shape : ?label:Base.string list -> ?grad_spec:Tensor.grad_spec -> ?batch_dims:Base.Int.t Base.List.t -> ?input_dims:Base.Int.t Base.List.t -> ?output_dims:Base.Int.t Base.List.t -> ?batch_axes:(Base.string * Base.Int.t) Base.List.t -> ?input_axes:(Base.string * Base.Int.t) Base.List.t -> ?output_axes:(Base.string * Base.Int.t) Base.List.t -> unit -> Tensor.t
val stop_gradient : ?label:Base.string list -> Tensor.t -> Tensor.t

stop_gradient is the identity in the forward pass and a no-op in the backprop pass.
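
A minimal sketch with illustrative names: the detached tensor contributes to the forward value but is treated as a constant during backprop.

    let w = range_of_shape ~label:["w"] ~output_dims:[ 4 ] ()
    let detached = stop_gradient ~label:["w_frozen"] w

    (* Gradient flows only through the first argument. *)
    let combined = add w detached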

val slice : ?label:Base.string list -> grad_spec:Tensor.grad_spec -> Idx.static_symbol -> Tensor.t -> Tensor.t
val embed_symbol : ?label:Base.string list -> Arrayjit.Indexing.static_symbol -> Tensor.t
module DO : sig ... end
module NDO : sig ... end
module TDSL : sig ... end
module NTDSL : sig ... end