#### torch

**ocaml-torch** provides OCaml bindings for the PyTorch tensor library.

This brings NumPy-like tensor computations with GPU acceleration and tape-based automatic differentiation to OCaml. These bindings use the PyTorch C++ API and are mostly automatically generated. The current GitHub tip and the opam package v0.7 correspond to PyTorch **v1.10.0**.

On Linux, note that you will need the PyTorch build that uses the cxx11 ABI (either the CPU version or the CUDA 10.2 version).
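The tape-based automatic differentiation can be exercised directly on tensors. Below is a minimal sketch, assuming the `Tensor` API of recent ocaml-torch releases (`ones`, `backward`, and `grad` as used later in this page):

```
open Torch

let () =
  (* A scalar tensor that records operations for backpropagation. *)
  let x = Tensor.ones [] ~requires_grad:true in
  (* y = x * x + 2x, so dy/dx = 2x + 2, which is 4 at x = 1. *)
  let y = Tensor.((x * x) + (x * f 2.)) in
  Tensor.backward y;
  (* Print the gradient of y with respect to x. *)
  Tensor.print (Tensor.grad x)
```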

### Opam Installation

The opam package can be installed using the following command. This automatically installs the CPU version of libtorch.

```
opam install torch
```

You can then compile some sample code, following the instructions below. **ocaml-torch** can also be used in interactive mode via utop or ocaml-jupyter. Here is a sample utop session.
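A minimal session, assuming utop can load the `torch` findlib package, might look like this (the printed tensor values and exact output formatting will vary):

```
# #require "torch";;
# open Torch;;
# let t = Tensor.randn [ 2; 3 ];;
val t : Tensor.t = <abstr>
# Tensor.print t;;
```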

#### Build a Simple Example

To build a first torch program, create a file `example.ml` with the following content.

```
open Torch

let () =
  let tensor = Tensor.randn [ 4; 2 ] in
  Tensor.print tensor
```

Then create a `dune` file with the following content:

```
(executables
 (names example)
 (libraries torch))
```

Run `dune exec example.exe` to compile the program and run it! Alternatively, you can first compile the code via `dune build example.exe`, then run the executable `_build/default/example.exe` (note that building the bytecode target `example.bc` may not work on macOS).


### Examples

Below is an example of a linear model trained on the MNIST dataset (full code).

```
(* Create two tensors to store model weights. *)
let ws = Tensor.zeros [ image_dim; label_count ] ~requires_grad:true in
let bs = Tensor.zeros [ label_count ] ~requires_grad:true in
let model xs = Tensor.(mm xs ws + bs) in
for index = 1 to 100 do
  (* Compute the cross-entropy loss. *)
  let loss =
    Tensor.cross_entropy_for_logits (model train_images) ~targets:train_labels
  in
  Tensor.backward loss;
  (* Apply gradient descent, disable gradient tracking for these. *)
  Tensor.(
    no_grad (fun () ->
      ws -= grad ws * f learning_rate;
      bs -= grad bs * f learning_rate));
  (* Compute the validation error. *)
  let test_accuracy =
    Tensor.(argmax (model test_images) = test_labels)
    |> Tensor.to_kind ~kind:(T Float)
    |> Tensor.sum
    |> Tensor.float_value
    |> fun sum -> sum /. test_samples
  in
  printf "%d %f %.2f%%\n%!" index (Tensor.float_value loss)
    (100. *. test_accuracy)
done
```
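The snippet above uses `train_images`, `train_labels`, `image_dim`, and similar names without defining them; in the full example they come from the MNIST helper bundled with ocaml-torch. A hedged sketch of that setup follows (the module and field names mirror the repository's examples and may differ between versions; the helper expects the raw MNIST files to be available locally):

```
open Torch

let () =
  (* Read the raw MNIST files; the helper caches a parsed copy. *)
  let mnist = Mnist_helper.read_files () in
  let train_images = mnist.train_images in
  (* Each image is flattened into a 784-dimensional row. *)
  Printf.printf "train set shape: %s\n"
    (String.concat "x" (List.map string_of_int (Tensor.shape train_images)))
```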

A simplified version of char-rnn illustrates character-level language modeling using Recurrent Neural Networks. Neural Style Transfer applies the style of one image to the content of another image; this uses a deep Convolutional Neural Network.

### Models and Weights

Various pre-trained computer vision models are implemented in the vision library. The weight files can be downloaded at the following links:


- ResNet-18 weights.
- ResNet-34 weights.
- ResNet-50 weights.
- ResNet-101 weights.
- ResNet-152 weights.
- DenseNet-121 weights.
- DenseNet-161 weights.
- DenseNet-169 weights.
- SqueezeNet 1.0 weights.
- SqueezeNet 1.1 weights.
- VGG-13 weights.
- VGG-16 weights.
- AlexNet weights.
- Inception-v3 weights.
- MobileNet-v2 weights.
- EfficientNet b0, b1, b2, b3, and b4 weights.

Running the pre-trained models on some sample images can then easily be done via the following command.

```
dune exec examples/pretrained/predict.exe path/to/resnet18.ot tiger.jpg
```

Natural Language Processing models based on BERT can be found in the ocaml-torch repo.

### Alternative Installation Option

This alternative way to install **ocaml-torch** can be useful for running with GPU acceleration enabled. The libtorch library can be downloaded from the PyTorch website (1.10.0 CPU version).

Download and extract the libtorch library, then build all the examples by running:

```
export LIBTORCH=/path/to/libtorch
git clone https://github.com/LaurentMazare/ocaml-torch.git
cd ocaml-torch
make all
```
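With a CUDA-enabled libtorch, tensors can then be placed on the GPU explicitly. A minimal sketch, assuming `Device.cuda_if_available` and the `?device` argument of tensor creation functions as found in recent ocaml-torch versions:

```
open Torch

let () =
  (* Falls back to the CPU when no CUDA device is present. *)
  let device = Device.cuda_if_available () in
  let t = Tensor.randn ~device [ 4; 2 ] in
  Tensor.print t
```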
