with module Time := Core.Time
type 'a t is the type of incrementals that have a value of type 'a. Incrementals are not covariant, i.e. we do not have +'a t -- consider, e.g. get_cutoff. However, if you have types a1 and a2 where a1 is a subtype of a2, and a value t1 : a1 t, then the following builds an incremental value of type a2 t:

let t2 : a2 t = t1 >>| fun a1 -> (a1 : a1 :> a2)
val sexp_of_t : ('a -> Ppx_sexp_conv_lib.Sexp.t) -> 'a t -> Ppx_sexp_conv_lib.Sexp.t
type 'a incremental = 'a t
include Core_kernel.Invariant.S1 with type 'a t := 'a t
val invariant : ('a -> unit) -> 'a t -> unit
val is_const : _ t -> bool
If is_const t, then t is a constant-valued incremental. is_const (const a) is true.
val is_valid : _ t -> bool
val is_necessary : _ t -> bool
val const : 'a -> 'a t
const v returns an incremental whose value never changes. It is the same as
return, but reads more clearly in many situations because it serves as a nice reminder that the incremental won't change (except possibly to be invalidated).
val return : 'a -> 'a t
map t1 ~f returns an incremental t that maintains its value as f a, where a is the value of t1. map2 through map9 are the generalizations to more arguments. If you need map<N> for some N > 9, it can easily be added, but also see array_fold and unordered_array_fold below.

f should not create incremental nodes, but this behavior is not checked; if you want to create incremental nodes, use bind. The invalidation machinery that is used with bind is not used with map.
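As a sketch of map in practice (assuming the library is instantiated via its Incremental.Make functor; the variable names are illustrative):

```ocaml
(* Minimal sketch: a derived value built with map2. *)
module Inc = Incremental.Make ()

let width  = Inc.Var.create 3
let height = Inc.Var.create 4

(* area recomputes only when width or height changes. *)
let area =
  Inc.map2 (Inc.Var.watch width) (Inc.Var.watch height) ~f:(fun w h -> w * h)

let area_obs = Inc.observe area

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn area_obs = 12);
  Inc.Var.set width 5;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn area_obs = 20)
```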
bind t1 ~f returns an incremental t2 that behaves like f v, where v is the value of t1. If t1's value changes, then incremental applies f to that new value and t2 behaves like the resulting incremental.

bind can be significantly more expensive than map during stabilization, because, when its left-hand side changes, it requires modification of the incremental DAG, while map simply flows values along the DAG. Thus it is preferable to use map (and its n-ary variants above) instead of bind unless one actually needs bind's dynamism.
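A sketch of the dynamism that requires bind (assuming instantiation via Incremental.Make; the scenario is illustrative): which input the result depends on is itself chosen incrementally.

```ocaml
(* Sketch: bind rewires the DAG when its left-hand side changes. *)
module Inc = Incremental.Make ()

let use_celsius = Inc.Var.create true
let celsius     = Inc.Var.create 20.0
let fahrenheit  = Inc.Var.create 68.0

(* temperature depends on only one of the two inputs at a time. *)
let temperature =
  Inc.bind (Inc.Var.watch use_celsius) ~f:(fun c ->
    if c then Inc.Var.watch celsius else Inc.Var.watch fahrenheit)

let obs = Inc.observe temperature

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 20.0);
  Inc.Var.set use_celsius false;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 68.0)
```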
bind2 t1 t2 ~f is:
bind (map2 t1 t2 ~f:(fun v1 v2 -> (v1, v2))) ~f:(fun (v1, v2) -> f v1 v2)
This is equivalent to
bind t1 ~f:(fun v1 -> bind t2 ~f:(fun v2 -> f v1 v2)) but more efficient due to using one bind node rather than two. The other
bind<N> functions generalize to more arguments.
module Infix : sig ... end
join t returns an incremental that behaves like the incremental that
t currently holds.
if_ tb ~then_ ~else_ returns an incremental t that holds the value of then_ if tb is true, and the value of else_ if tb is false. Note that t only depends on one of then_ and else_ at a time, i.e. if_ tb ~then_ ~else_ is like:

bind tb ~f:(fun b -> if b then then_ else else_)

which is not the same as:

map3 tb then_ else_ ~f:(fun b then_ else_ -> if b then then_ else else_)
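A small sketch of if_ (assuming instantiation via Incremental.Make): only the selected branch is a dependency at any given time.

```ocaml
module Inc = Incremental.Make ()

let b = Inc.Var.create true
let x = Inc.Var.create 1
let y = Inc.Var.create 2

(* Depends on x only while b is true; y becomes a dependency when b flips. *)
let t =
  Inc.if_ (Inc.Var.watch b) ~then_:(Inc.Var.watch x) ~else_:(Inc.Var.watch y)

let obs = Inc.observe t

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 1);
  Inc.Var.set b false;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 2)
```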
freeze ?when_ t returns an incremental whose value is t's value v until the first stabilization in which when_ v holds, at which point the freeze node's value becomes constant and never changes again. Calling freeze t forces t to be necessary until it freezes, regardless of whether the freeze node is necessary, but not thereafter (although of course t could remain necessary for other reasons). The result of freeze t, once frozen, will never be invalidated, even if t is invalidated, and even if the scope in which the freeze is created is invalidated. However, prior to when_ v becoming true, freeze t can be invalidated.
depend_on input ~depend_on returns an
output whose value is the same as
input's value, such that
depend_on is necessary so long as
output is necessary. It is like:
map2 input depend_on ~f:(fun a _ -> a)
but with a cutoff such that
output's value only changes when
input's value changes.
necessary_if_alive input returns
output that has the same value and cutoff as
input, such that as long as
output is alive,
input is necessary.
for_all ts returns an incremental that is true iff all of the ts are true.

exists ts returns an incremental that is true iff at least one of the ts is true.
all ts returns an incremental whose value is a list of the values of all of the ts. In any stabilization where any of the ts changes, the entire list is recreated (once all of the ts have stabilized). This is essentially an array_fold over the ts.
Array folds and sums
array_fold ts ~init ~f creates an incremental
t whose value is:
Array.fold ts ~init ~f:(fun ac t -> f ac (value t))
In a stabilization during which any of the
ts changes, the entire fold will be computed once all of the
ts have been computed.
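A sketch of array_fold over an array of incremental ints (assuming instantiation via Incremental.Make, and that array_fold takes the array positionally with ~init and ~f as shown in the surrounding prose):

```ocaml
module Inc = Incremental.Make ()

let vars = Array.init 3 (fun i -> Inc.Var.create i)  (* values 0, 1, 2 *)
let ts   = Array.map Inc.Var.watch vars

(* The whole fold is recomputed whenever any of the ts changes. *)
let total = Inc.array_fold ts ~init:0 ~f:( + )

let obs = Inc.observe total

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 3);
  Inc.Var.set vars.(0) 10;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 13)
```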
reduce_balanced ts ~f ~reduce does a fold-like operation over the ts. Unlike array_fold, the operation will be computed in O(min(n, k * log(n))) time, where n is the size of ts and k is the number of elements of ts that have changed since the last stabilization.

Generally, if most or all of the ts are changing between stabilizations, using array_fold will have better constant factors.

The reduce argument must be an associative operation: reduce a (reduce b c) = (reduce (reduce a b) c).

None is returned upon supplying an empty array.
val unordered_array_fold : ?full_compute_every_n_changes:int -> 'a t array -> init:'b -> f:('b -> 'a -> 'b) -> f_inverse:('b -> 'a -> 'b) -> 'b t
unordered_array_fold ts ~init ~f ~f_inverse folds over the ts. Unlike array_fold, the fold will be computed in time proportional to the number of ts that change rather than the number of ts. In a stabilization, for each t in ts that changes from old_value to new_value, the value of the unordered-array fold will change from b to f (f_inverse b old_value) new_value. The t's that change may take effect in any order.

If repeated changes might accumulate error, one can cause the fold to be fully computed after every full_compute_every_n_changes changes. If you do not supply full_compute_every_n_changes, then full computes will never happen after the initial one.
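A sketch of an incremental sum maintained with unordered_array_fold (assuming instantiation via Incremental.Make): each changed input costs one f_inverse to remove the old value and one f to add the new one.

```ocaml
module Inc = Incremental.Make ()

let vars = Array.init 100 (fun _ -> Inc.Var.create 1)
let ts   = Array.map Inc.Var.watch vars

(* Work per stabilization is proportional to how many inputs changed. *)
let total =
  Inc.unordered_array_fold ts ~init:0 ~f:( + ) ~f_inverse:( - )

let obs = Inc.observe total

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 100);
  Inc.Var.set vars.(0) 5;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 104)
```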
opt_unordered_array_fold is like unordered_array_fold, except that its result is Some iff all its inputs are Some.
val sum : ?full_compute_every_n_changes:int -> 'a t array -> zero:'a -> add:('a -> 'a -> 'a) -> sub:('a -> 'a -> 'a) -> 'a t
sum ts ~zero ~add ~sub ?full_compute_every_n_changes returns an incremental that maintains the sum of the
ts. It uses
unordered_array_fold so that the work required to maintain the sum is proportional to the number of
ts that change (i.e. one
sub and one
add per change).
opt_sum is like sum, except that its result is Some iff all its inputs are Some.
sum_float ts is:
sum ts ~zero:0.0 ~add:(+.) ~sub:(-.) ~full_compute_every_n_changes:(Array.length ts)
This uses sum for fast update, with a full recompute every length ts changes to cut down on floating-point error.
module Var : sig ... end
module Observer : sig ... end
An observer lets one get the value of an incremental, either by asking directly for the value or installing an on-update handler to run when the incremental's value changes.
val observe : ?should_finalize:bool -> 'a t -> 'a Observer.t
observe t returns a new observer for t. observe raises if called during stabilization.
By default, an observer has a finalizer that calls
disallow_future_use when the observer is no longer referenced. One can use
~should_finalize:false to cause the finalizer to not be created, in which case the observer will live until
disallow_future_use is explicitly called.
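A sketch of the observer life cycle (assuming instantiation via Incremental.Make): observing makes a node necessary, and disallow_future_use releases it explicitly rather than waiting for the finalizer.

```ocaml
module Inc = Incremental.Make ()

let v = Inc.Var.create 0
let t = Inc.map (Inc.Var.watch v) ~f:(fun x -> x + 1)

(* Observing t makes it necessary, so stabilize computes it. *)
let obs = Inc.observe t

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 1);
  (* When done, release the observer explicitly. *)
  Inc.Observer.disallow_future_use obs
```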
module Update : sig ... end
on_update t ~f is similar to Observer.on_update_exn, but it does not cause t to be necessary. Instead of the Initialized update, there are updates for when a node becomes Necessary or Unnecessary. There is a state diagram for the allowable sequences of Update.t's that can be supplied to a particular f.
stabilize () recomputes all incrementals that are necessary and stale. I.e. it propagates changes from variables that have been set to the necessary incrementals that depend on them, stopping propagation as per cutoffs.
module Cutoff : sig ... end
'a Cutoff.t is a function that returns true if propagation of changes should be cut off at a node, based on the old value and the (possible) new value of the node.
set_cutoff t cutoff replaces the current cutoff function for t. After that, cutoff will be called any time t is recomputed, with old_value being the value of t before the recomputation and new_value being the value that was just recomputed. If cutoff ~old_value ~new_value, then t's value will remain as old_value (and new_value is discarded), and anything depending on t will not be recomputed (at least not because of t). If not (cutoff ~old_value ~new_value), then t's value will become new_value, and all nodes depending on t will be recomputed.

A reasonable choice for cutoff is an equality function on 'a.

The default cutoff for every node is phys_equal. For example, this means that a unit incremental would only fire once; to disable this, use Cutoff.never.
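A sketch of installing a custom cutoff (assuming instantiation via Incremental.Make, and assuming a Cutoff.create that builds a cutoff from a predicate on the old and new values):

```ocaml
module Inc = Incremental.Make ()

let v = Inc.Var.create 0.0
let t = Inc.map (Inc.Var.watch v) ~f:Float.abs

(* Only propagate when the value changes by more than a small epsilon. *)
let () =
  Inc.set_cutoff t
    (Inc.Cutoff.create (fun ~old_value ~new_value ->
       Float.abs (old_value -. new_value) < 1e-9))
```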
module Scope : sig ... end
val lazy_from_fun : (unit -> 'a) -> 'a Core_kernel.Lazy.t
lazy_from_fun f is like
Lazy.from_fun f, except that the nodes created by
f will be created in the scope in which
lazy_from_fun was called, rather than in the scope of the piece of code that first forces the resulting lazy. Not using this function when defining lazy values is likely to result in exceptions being thrown by incremental. As a rule of thumb, all
lazy e that might create incremental nodes should be replaced by
lazy_from_fun (fun () -> e).
As usual with Lazy, if f raises, then that exception will be raised when calling Lazy.force.
memoize_fun f hashable returns a function m that is a memoized version of f: m will run f a on each distinct a that m is applied to, memoize the result (in a hash table), and thereafter, for that a, return the memoized result.

When m is called, it uses Scope.within to run f in the scope that was in effect when memoize_fun f was called. This is essential to correctly capture the dependence of nodes that f creates on values that f is closed over, which may in turn depend on the left-hand sides of binds in the scope in effect when memoize_fun f was called. Furthermore, nodes that f creates do not depend on the scope in effect when m is called.
memoize_fun_by_key is a generalization that allows one to memoize over values that contain a uniquely identifying key, but also have other data.
val memoize_fun : ?initial_size:int -> 'a Core_kernel.Hashtbl.Hashable.t -> ('a -> 'b) -> ('a -> 'b) Core_kernel.Staged.t
val memoize_fun_by_key : ?initial_size:int -> 'key Core_kernel.Hashtbl.Hashable.t -> ('a -> 'key) -> ('a -> 'b) -> ('a -> 'b) Core_kernel.Staged.t
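A sketch of memoize_fun usage (assuming instantiation via Incremental.Make; the expensive function here is a hypothetical stand-in). The result is staged, so it is unwrapped with Core_kernel.unstage, and Core_kernel.Int.hashable supplies the hash-table parameters:

```ocaml
module Inc = Incremental.Make ()

(* Hypothetical expensive function over ints. *)
let expensive x = x * x

let memoized =
  Core_kernel.unstage (Inc.memoize_fun Core_kernel.Int.hashable expensive)

let () =
  assert (memoized 4 = 16);
  assert (memoized 4 = 16)  (* second call returns the memoized result *)
```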
val user_info : _ t -> Core_kernel.Info.t option
For debugging purposes, one can store an arbitrary
Info.t in a node. This will be displayed as part of a node in error messages.
val set_user_info : _ t -> Core_kernel.Info.t option -> unit
module Expert : sig ... end
A low-level, experimental interface to incremental. This is useful when you need more control over the dependency graph, for performance reasons. It comes at the cost that it is much harder to use correctly.
module State : sig ... end
module Packed : sig ... end
save_dot file outputs to
file the DAG of all necessary nodes, in dot format.
val keep_node_creation_backtrace : bool Core_kernel.ref
If keep_node_creation_backtrace is true, then whenever a new node is created, incremental will call
Backtrace.get and store the result in the node. The backtrace will then appear in subsequent error messages when the node is pretty printed.
module Let_syntax : sig ... end
Let_syntax allows you to write expressions over incrementals using let%map and let%bind syntax.
Incremental has a timing-wheel-based clock, and lets one build incremental values that change as its time passes. One must explicitly call
advance_clock to change incremental's clock; there is no implicit call based on the passage of time.
val alarm_precision : Core_kernel.Int63.t
The alarm_precision of the underlying timing wheel.
val now : unit -> Core.Time.t
now () returns the current time of incremental's clock.
val watch_now : unit -> Core.Time.t t
watch_now () returns an incremental that tracks the current time.
val advance_clock : to_:Core.Time.t -> unit
advance_clock ~to_ moves incremental's clock forward to to_. advance_clock raises if to_ < now (). As with Var.set, the effect of advance_clock is not seen on incremental values until the next stabilization. Unlike Var.set, calling advance_clock during stabilization raises.
In certain pathological cases,
advance_clock can raise due to it detecting a cycle in the incremental graph.
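A sketch of driving the clock by hand (assuming instantiation via Incremental.Make): an at node flips from Before to After once the clock is advanced past its time and the graph is stabilized.

```ocaml
module Inc = Incremental.Make ()

let start = Inc.now ()

(* Becomes After once the clock passes start + 5s (up to alarm_precision late). *)
let alarm = Inc.at (Core.Time.add start (Core.Time.Span.of_sec 5.))
let obs = Inc.observe alarm

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = Inc.Before_or_after.Before);
  Inc.advance_clock ~to_:(Core.Time.add start (Core.Time.Span.of_sec 10.));
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = Inc.Before_or_after.After)
```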
module Before_or_after : sig ... end
at time returns an incremental that is Before when now () <= time and After when now () >= time + alarm_precision. When now () is between time and time + alarm_precision, at time might be Before or After, due to the fundamental imprecision of the timing wheel. One is guaranteed that an at never becomes After too early, but it may become After up to alarm_precision late.
val at : Core.Time.t -> Before_or_after.t t
val after : Core.Time.Span.t -> Before_or_after.t t
val at_intervals : Core.Time.Span.t -> unit t
at_intervals interval returns an incremental whose value changes at time intervals of the form:
Time.next_multiple ~base ~after ~interval
where base is now () when at_intervals was called and after is the current now (). As with at, at_intervals might fire up to alarm_precision late.

at_intervals raises if
interval < alarm_precision. The
unit t that
at_intervals returns has its cutoff set to
Cutoff.never, so that although its value is always
(), incrementals that depend on it will refire each time it is set. The result of
at_intervals remains alive and is updated until the left-hand side of its defining bind changes, at which point it becomes invalid.
val step_function : init:'a -> (Core.Time.t * 'a) list -> 'a t
step_function ~init [(t1, v1); ...; (tn, vn)] returns an incremental whose initial value is init and that takes on the values v1, ..., vn in sequence, taking on the value vi when the clock's time passes ti. As with at, the steps might take effect up to alarm_precision late.

It is possible for vi to be skipped if time advances from t(i-1) to some time greater than t(i+1).

The times must be in nondecreasing order, i.e. step_function raises if for some i < j, ti > tj.
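A sketch of a step function (assuming instantiation via Incremental.Make; in_ is an illustrative helper for times relative to the current clock):

```ocaml
module Inc = Incremental.Make ()

let start = Inc.now ()
let in_ s = Core.Time.add start (Core.Time.Span.of_sec s)

(* Value steps from 0 to 1 to 2 as the clock passes the given times. *)
let phase = Inc.step_function ~init:0 [ (in_ 1., 1); (in_ 2., 2) ]

let obs = Inc.observe phase

let () =
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 0);
  Inc.advance_clock ~to_:(in_ 3.);
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 2)
```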
val snapshot : 'a t -> at:Core.Time.t -> before:'a -> 'a t Core_kernel.Or_error.t
snapshot value_at ~at ~before returns an incremental whose value is before until time at, and whose value is frozen to the value of value_at during the first stabilization after which the time passes at. snapshot causes value_at to be necessary during the first stabilization after which time passes at, even if the snapshot node itself is not necessary, but not thereafter (although of course value_at could remain necessary for other reasons). The result of snapshot will only be invalidated if value_at is invalid at the moment of the snapshot. snapshot returns Error if at < now (), because it is impossible to take the snapshot because the time has already passed.
The weak versions of the memoization functions use a
Weak_hashtbl for the memo table. This keeps a weak pointer to each result, and so the garbage collector automatically removes unused results. Furthermore,
stabilize removes the table entries whose result is unused.
val weak_memoize_fun : ?initial_size:int -> 'a Core.Hashtbl.Hashable.t -> ('a -> 'b Core.Heap_block.t) -> ('a -> 'b Core.Heap_block.t) Core.Staged.t
val weak_memoize_fun_by_key : ?initial_size:int -> 'key Core.Hashtbl.Hashable.t -> ('a -> 'key) -> ('a -> 'b Core.Heap_block.t) -> ('a -> 'b Core.Heap_block.t) Core.Staged.t