Recall that

Encoding and decoding work together. But they work only in batch mode: encoding computes a fraction, and yields nothing until the last step, and so decoding cannot start until encoding has finished. We really want encoding to yield as the encoded text a list of bits representing the fraction, rather than the fraction itself, so that we can stream the encoded text and the decoding process. To this end, we replace by , where

The obvious definitions have yield the shortest binary expansion of any fraction within , and evaluate this binary expansion. However, we don’t do quite this—it turns out to prevent the streaming condition from holding—and instead arrange for to yield the bit sequence that when extended with a 1 yields the shortest expansion of any fraction within (and indeed, the shortest binary expansion necessarily ends with a 1), and compute the value with this 1 appended.

Thus, if then the binary expansion of any fraction within starts with 0; and similarly, if , the binary expansion starts with 1. Otherwise, the interval straddles ; the shortest binary expansion within is the expansion of , so we yield the empty bit sequence.

Note that is a hylomorphism, so we have

Moreover, it is clear that yields a finite bit sequence for any non-empty interval (since the interval doubles in width at each step, and the process stops when it includes ); so this equation serves to uniquely define . In other words, is a *recursive coalgebra*. Then it is a straightforward exercise to prove that ; so although and differ, they are sufficiently similar for our purposes.

Now we redefine encoding to yield a bit sequence rather than a fraction, and decoding correspondingly to consume that bit sequence:

That is, we move the part of from the encoding stage to the decoding stage.

Just like , the new version of encoding consumes all of its input before producing any output, so does not work for encoding infinite inputs, nor for streaming execution even on finite inputs. However, it is nearly in the right form to be a *metamorphism*—a change of representation from lists of symbols to lists of bits. In particular, is associative, and is its unit, so we can replace the with a :

Now that is in the right form, we must check the streaming condition for and . We consider one of the two cases in which is productive, and leave the other as an exercise. When , and assuming , we have:

as required. The last step is a kind of associativity property:

whose proof is left as another exercise. Therefore the streaming condition holds, and we may fuse the with the , defining

which streams the encoding process: the initial bits are output as soon as they are fully determined, even before all the input has been read. Note that and differ, in particular on infinite inputs (the former diverges, whereas the latter does not); but they coincide on finite symbol sequences.

Similarly, we want to be able to stream decoding, so that we don’t have to wait for the entire encoded text to arrive before starting decoding. Recall that we have so far

where is an and a . The first obstacle to streaming is that , which we need to be a instead. We have

Of course, is not associative—it doesn’t even have the right type for that. But we can view each bit in the input as a function on the unit interval: bit 0 is represented by the function that focusses into the lower half of the unit interval, and bit 1 by the function that focusses into the upper half. The fold itself composes a sequence of such functions; and since function composition is associative, this can be written equally well as a or a . Having assembled the individual focussers into one composite function, we finally apply it to . (This is in fact an instance of a general trick for turning a into a , or vice versa.) Thus, we have:

where yields either the lower or the upper half of the unit interval:

In fact, not only may the individual bits be seen as focussing functions and on the unit interval, so too may compositions of such functions:

So any such composition is of the form for some interval , and we may represent it concretely by itself, and retrieve the function via :

So we now have

This is almost in the form of a metamorphism, except for the occurrence of the adapter in between the unfold and the fold. It is not straightforward to fuse that adapter with either the fold or the unfold; fortunately, however, we can split it into the composition

of two parts, where

in such a way that the first part fuses with the fold and the second part fuses with the unfold. For fusing the first half of the adapter with the fold, we just need to carry around the additional value with the interval being focussed:

where

For fusing the second half of the adapter with the unfold, let us check the fusion condition. We have (exercise!):

where the is the functorial action for the base functor of the datatype, applying just to the second component of the optional pair. We therefore define

and have

and therefore

Note that the right-hand side will eventually lead to intervals that exceed the unit interval. When , it follows that ; but the unfolding process keeps widening the interval without bound, so it will necessarily eventually exceed the unit bounds. We return to this point shortly.

We have therefore concluded that

Now we need to check the streaming condition for and . Unfortunately, this is never going to hold: is always productive, so will only take production steps and never consume any input. The problem is that is too aggressive, and we need to use the more cautious flushing version of streaming instead. Informally, the streaming process should be productive from a given state only when the whole of interval maps to the same symbol in model , so that however is focussed by subsequent inputs, that symbol cannot be invalidated.

More formally, note that

where

and

That is, the interval is “safe” for model if it is fully included in the encoding of some symbol ; then all elements of decode to . Then, and only then, we may commit to outputting , because no further input bits could lead to a different first output symbol.

Note now that the interval remains bounded by unit interval during the streaming phase, because of the safety check in , although it will still exceed the unit interval during the flushing phase. However, at this point we can undo the fusion we performed earlier, “fissioning” into again: this manipulates rationals rather than intervals, so there is no problem with intervals getting too wide. We therefore have:

Now let us check the streaming condition for and the more cautious . Suppose that is a productive state, so that holds, that is, all of interval is mapped to the same symbol in , and let

so that . Consuming the next input leads to state . This too is a productive state, because for any , and so the whole of the focussed interval is also mapped to the same symbol in the model. In particular, the midpoint of is within interval , and so the first symbol produced from the state after consumption coincides with the symbol produced from the state before consumption. That is,

as required. We can therefore rewrite decoding as a flushing stream computation:

That is, initial symbols are output as soon as they are completely determined, even before all the input bits have been read. This agrees with on finite bit sequences.

We will leave arithmetic coding at this point. There is actually still quite a bit more arithmetic required—in particular, for competitive performance it is important to use only *fixed-precision* arithmetic, restricting attention to rationals within the unit interval with denominator for some fixed . In order to be able to multiply two numerators using 32-bit integer arithmetic without the risk of overflow, we can have at most . Interval narrowing now needs to be *approximate*, rounding down both endpoints to integer multiples of . Care needs to be taken so that this rounding never makes the two endpoints of an interval coincide. Still, encoding can be written as an instance of . Decoding appears to be more difficult: the approximate arithmetic means that we no longer have interval widening as an exact inverse of narrowing, so the approach above no longer works. Instead, our 2002 lecture notes introduce a “destreaming” operator that simulates and inverts streaming: the decoder works in sympathy with the encoder, performing essentially the same interval arithmetic but doing the opposite conversions. Perhaps I will return to complete that story some time…


The basic idea behind arithmetic coding is essentially to encode an input text as a subinterval of the unit interval, based on a *model* of the text symbols that assigns them to a partition of the unit interval into non-empty subintervals. For the purposes of this post, we will deal mostly with half-open intervals, so that the interval contains values such that , where are rationals.

For example, with just two symbols “a” and “b”, and a static model partitioning the unit interval into for “a” and for “b”, the symbols in the input text “aba” successively narrow the unit interval to , and the latter interval is the encoding of the whole input. And in fact, it suffices to pick any single value in this final interval, as long as there is some other way to determine the end of the encoded text (such as the length, or a special end-of-text symbol).

We introduce the following basic definitions for intervals:

We’ll write “” for , and “” for .

A crucial operation on intervals is *narrowing* of one interval by another, where is to as is to the unit interval:

We’ll write “” for . Thus, is “proportionately of the way between and ”, and we have

Conversely, we can *widen* one interval by another:

We’ll write “” for . Note that is inverse to , in the sense

and consequently widening is inverse to narrowing:
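In Haskell, these interval operations might be rendered as follows. This is a reconstruction: the names (`Interval`, `weight`, `scale`, `narrow`, `widen`) and the use of `Rational` are my assumptions about the elided definitions.

```haskell
type Interval = (Rational, Rational)   -- half-open [l, r)

unit :: Interval
unit = (0, 1)

-- weight i x is "proportionately x of the way between the ends of i"
weight :: Interval -> Rational -> Rational
weight (l, r) x = l + (r - l) * x

-- scale i y is the inverse journey: how far through i the value y lies
scale :: Interval -> Rational -> Rational
scale (l, r) y = (y - l) / (r - l)

-- narrowing: narrow i j is to i as j is to the unit interval
narrow :: Interval -> Interval -> Interval
narrow i (p, q) = (weight i p, weight i q)

-- widening: inverse to narrowing, so widen i (narrow i j) == j
widen :: Interval -> Interval -> Interval
widen i (p, q) = (scale i p, scale i q)
```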

We work with inputs consisting of sequences of symbols, which might be characters or some higher-level tokens:

The type then must provide the following operations:

- a way to look up a symbol, obtaining the corresponding interval:

- conversely, a way to decode a value, retrieving a symbol:

- an initial model:

- a means to *adapt* the model on seeing a new symbol:

The central property is that encoding and decoding are inverses, in the following sense:

There are no requirements on and , beyond the latter being a total function.

For example, we might support adaptive coding via a model that counts the occurrences seen so far of each of the symbols, represented as a histogram:

This naive implementation works well enough for small alphabets. One might maintain the histogram in decreasing order of counts, so that the most likely symbols are at the front and are therefore found quickest. For larger alphabets, it is better to maintain the histogram as a binary search tree, ordered alphabetically by symbol, and caching the total counts of every subtree.
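A minimal sketch of such a counting model, using the naive list representation; the names (`Model`, `encodeSym`, `decodeSym`, `adapt`) are illustrative assumptions, not necessarily the definitions elided above.

```haskell
import Data.Ratio ((%))

-- a symbol model as a histogram of counts
type Model = [(Char, Integer)]

-- the subinterval of the unit interval assigned to a symbol,
-- computed by cumulative frequency
encodeSym :: Model -> Char -> (Rational, Rational)
encodeSym m s = go 0 m
  where
    total = sum (map snd m)
    go acc ((c, n) : rest)
      | c == s    = (acc % total, (acc + n) % total)
      | otherwise = go (acc + n) rest
    go _ [] = error "symbol not in model"

-- the symbol whose subinterval contains a given fraction
decodeSym :: Model -> Rational -> Char
decodeSym m x = head [ c | (c, _) <- m
                         , let (l, r) = encodeSym m c
                         , l <= x && x < r ]

-- adapting: bump the count of the symbol just seen
adapt :: Model -> Char -> Model
adapt m s = [ (c, if c == s then n + 1 else n) | (c, n) <- m ]
```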

Now encoding is straightforward to define. The function takes an initial model and a list of symbols, and returns the list of intervals obtained by looking up each symbol in turn, adapting the model at each step:

That is,

We then narrow the unit interval by each of these subintervals, and pick a single value from the resulting interval:

All we require of is that ; then yields a fraction in the unit interval. For example, we might set , where

So much for encoding; how do we retrieve the input text? In fact, we can retrieve the first symbol simply by using . Expanding the encoding of a non-empty text, we have:

The proof obligation, left as an exercise, is to show that

which holds when is of the form for some .

Now

and indeed, encoding yields a fraction in the unit interval, so this recovers the first symbol correctly. This is the foothold that allows the decoding process to make progress; having obtained the first symbol using , it can adapt the model in precisely the same way that the encoding process does, then retrieve the second symbol using that adapted model, and so on. The only slightly tricky part is that when decoding an initial value , having obtained the first symbol , decoding should continue on some modified value ; what should the modification be? It turns out that the right thing to do is to scale by the interval associated in the model with symbol , since scaling is the inverse operation to the s that take place during encoding. That is, we define:

(Of course, , by the inverse requirement on models, and so the new scaled value is again within the unit interval.)

Note that decoding yields an infinite list of symbols; the function is always productive. Nevertheless, that infinite list starts with the encoded text, as we shall now verify. Define the round-trip function

Then we have:

From this it follows that indeed the round-trip recovers the initial text, in the sense that yields an infinite sequence that starts with ; in fact,

yielding the original input followed by some junk, the latter obtained by decoding the fraction (the encoding of ) from the final model that results from adapting the initial model to each symbol in in turn. To actually retrieve the input text with no junk suffix, one could transmit the length separately (although that doesn’t sit well with streaming), or append a distinguished end-of-text symbol.
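Putting the pieces together, the encoder and the (productive, infinite) decoder described above might be sketched as follows. To keep the sketch self-contained it abstracts over the model operations; the names, and the choice of the midpoint for the picking function, are my assumptions.

```haskell
type Interval = (Rational, Rational)

-- the model operations, passed explicitly
data ModelOps m s = ModelOps
  { enc   :: m -> s -> Interval       -- look up a symbol's interval
  , dec   :: m -> Rational -> s       -- decode a fraction to a symbol
  , adapt :: m -> s -> m              -- adapt the model to a symbol
  }

narrow :: Interval -> Interval -> Interval
narrow (l, r) (p, q) = (w p, w q) where w x = l + (r - l) * x

scale :: Interval -> Rational -> Rational
scale (l, r) y = (y - l) / (r - l)

-- encode: narrow the unit interval by each symbol's interval in turn,
-- then pick a single fraction (here, the midpoint)
encode :: ModelOps m s -> m -> [s] -> Rational
encode ops m0 = pick . foldl narrow (0, 1) . intervals m0
  where
    intervals _ []       = []
    intervals m (s : ss) = enc ops m s : intervals (adapt ops m s) ss
    pick (l, r) = (l + r) / 2

-- decode: retrieve a symbol, rescale by its interval, and repeat
decode :: ModelOps m s -> m -> Rational -> [s]
decode ops m x = s : decode ops (adapt ops m s) (scale (enc ops m s) x)
  where s = dec ops m x
```

With a static two-symbol model, decoding the encoding of a finite text yields that text followed by junk, as described above.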

So far we have an encoder and a decoder, and a proof that the decoder successfully decodes the encoded text. In the next post, we’ll see how to reimplement both as streaming processes.


So for example, we can convert an infinite fraction in base 3 to one in base 7 with

where

In this post, we’ll see another number conversion problem, which will deliver the digits of . For more details, see my paper, although the presentation here is simpler now.

Leibniz showed that

From this, using Euler’s convergence-accelerating transformation, one may derive

or equivalently

This can be seen as the number in a funny mixed-radix base , just as the usual decimal expansion

is represented by the number in the fixed-radix base . Computing the decimal digits of is then a matter of conversion from the mixed-radix base to the fixed-radix base.

Let’s remind ourselves of how it should work, using a simpler example: conversion from one fixed base to another. We are given an infinite-precision fraction in the unit interval

in base , in which for each digit . We are to convert it to a similar representation

in base , in which for each output digit . The streaming process maintains a state , a pair of rationals; the invariant is that after consuming input digits and producing output digits, we have

so that represents a linear function that should be applied to the value represented by the remaining input.

We can initialize the process with . At each step, we first try to produce another output digit. The remaining input digits represent a value in the unit interval; so if and have the same integer part, then that must be the next output digit, whatever the remaining input digits are. Let be that integer. Now we need to find such that

for any remainder ; then we can increment and set to and the invariant is maintained. A little algebra shows that we should take and .

If and have different integer parts, we cannot yet tell what the next output digit should be, so we must consume the next input digit instead. Now we need to find such that

for any remainder ; then we can increment and set to and the invariant is again maintained. Again, algebraic manipulation leads us to and .
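The two step rules just derived can be packaged as a small program. This is a reconstruction with assumed names: the state pair `(u, v)` means that the value still to be produced is `u + v * x`, where `x` is the value (in the unit interval) of the unconsumed input digits.

```haskell
import Data.Ratio ((%))

-- convert b c: digits of a fraction in base b to digits in base c
convert :: Integer -> Integer -> [Integer] -> [Integer]
convert b c = go (0, 1)
  where
    go (u, v) ds
        -- produce: both bounds have the same integer part in base c
      | floor lo == (floor hi :: Integer) =
          let y = floor lo
          in y : go (lo - fromInteger y, hi - lo) ds
        -- consume: absorb the next input digit into the state
      | (d : ds') <- ds = go (u + v * (d % b), v / fromInteger b) ds'
      | otherwise = []   -- input exhausted (no flushing in this sketch)
      where
        lo = fromInteger c * u          -- c * (u + v * 0)
        hi = fromInteger c * (u + v)    -- c * (u + v * 1)
```

For instance, the base-3 input 0.020202… (the fraction 1/4) converts to the base-7 digits 1,5,1,5,…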

For example, , and the conversion starts as follows:

That is, the initial state is . This state does not yet determine the first output digit, so we consume the first input digit 0 to yield the next state . This state still does not determine the first output, and nor will the next; so we consume the next two input digits 2 and 0, yielding state . This state does determine the next digit: and both start with a 1 in base 7. So we can produce a 1 as the first output digit, yielding state . And so on.

The process tends to converge. Each production step widens the non-empty window by a factor of , so it will eventually contain multiple integers; therefore we cannot produce indefinitely. Each consumption step narrows the window by a factor of , so it will tend towards eventually producing the next output digit. However, this doesn’t always work. For example, consider converting to base 3:

The first output digit is never determined: if the first non-3 in the input is less than 3, the value is less than a third, and the first output digit should be a 0; if the first non-3 is greater than 3, then the value is definitely greater than a third, and it is safe to produce a 1 as the first output digit; but because the input is all 3s, we never get to make this decision. This problem will happen whenever the value being represented has a finite representation in the output base.

Let’s return now to computing the digits of . We have the input

which we want to convert to decimal. The streaming process maintains a pair of rationals, but this time representing the linear function , since this time our expression starts with a sum rather than a product. The invariant is similar: after consuming input digits and producing output digits, we have

Note that the output base is fixed at 10; but more importantly, the input *digits* are all fixed at 2, and it is the input *base* that varies from digit to digit.

We can initialize the process with . At each step, we first try to produce an output digit. What value might the remaining input

represent? Each of the bases is at least , so it is clear that , where

which has unique solution . Similarly, each of the bases is less than , so it is clear that , where

which has unique solution . So we consider the bounds and ; if these have the same integer part , then that is the next output digit. Now we need to find such that

for any remainder , so we pick and . Then we can increment and set to , and the invariant is maintained.

If the two bounds have different integer parts, we must consume the next input digit instead. Now we need to find such that

for all , so we pick and . Then we can increment and set to , and again the invariant is maintained.

The conversion starts as follows:

Happily, non-termination ceases to be a problem: the value being represented does not have a finite representation in the output base, being irrational.
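The derivation above can be packaged as a short program using exact rational arithmetic. This is a reconstruction with assumed names: the state `(u, v)` represents the function that maps `x` to `u + v * x`, to be applied to the value of the remaining input, which lies between the two bounds computed above; the n-th input “digit” is 2, in base n/(2n+1).

```haskell
piDigits :: [Integer]
piDigits = go 1 (0, 1)
  where
    go :: Rational -> (Rational, Rational) -> [Integer]
    go n (u, v)
        -- produce: both bounds agree on the next decimal digit
      | y == floor (u + 4 * v) =
          y : go n (10 * (u - fromInteger y), 10 * v)
        -- consume: absorb another input digit 2, in base n/(2n+1)
      | otherwise = go (n + 1) (u + 2 * v, v * (n / (2 * n + 1)))
      where y = floor (u + 3 * v)
```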

We can plug these definitions straight into the function above:

where

and

(The s make rational numbers in Haskell, and force the ambiguous fractional type to be rather than .)

In fact, this program can be considerably simplified, by inlining the definitions. In particular, the input digits are all 2, so we need not supply them. Moreover, the component of the state is never used, because we treat each output digit in the same way (in contrast to the input digits); so that may be eliminated. Finally, we can eliminate some of the numeric coercions if we represent the component as a rational in the first place:

Then we have


I don’t think I’ve written about them yet in this series (another story, for another day), but *hylomorphisms* consist of a fold after an unfold. One very simple example is the factorial function: is the product of the predecessors of . The predecessors can be computed with an unfold:

and the product as a fold:

and then factorial is their composition:

Another example is a tree-based sorting algorithm that resembles Hoare’s quicksort: from the input list, grow a binary search tree, as an unfold, and then flatten that tree back to a sorted list, as a fold. This is a divide-and-conquer algorithm; in general, these can be modelled as unfolding a tree of subproblems by repeatedly dividing the problem, then collecting the solution to the original problem by folding together the solutions to subproblems.

This post is about the opposite composition, an unfold after a fold. Some examples:

- to reformat a list of lists to a given length;
- to sort a list;
- to convert a fraction from base to base ;
- to encode a text in binary by “arithmetic coding”.

In each of these cases, the first phase is a fold, which consumes some structured representation of a value into an intermediate unstructured format, and the second phase is an unfold, which generates a new structured representation. Their composition effects a change of representation, so we call them metamorphisms.

Hylomorphisms always *fuse*, and one can deforest the intermediate *virtual data structure*. For example, one need not construct the intermediate list in the factorial function; since each cell gets constructed in the unfold only to be immediately deconstructed in the fold, one can cut to the chase and go straight to the familiar recursive definition. For the base case, we have:

and for non-zero argument , we have:

In contrast, metamorphisms only fuse under certain conditions. However, when they do fuse, they also allow infinite representations to be processed, as we shall see.

Fusion seems to depend on the fold being tail-recursive; that is, a :

For the unfold phase, we will use the usual list unfold:

We define a metamorphism as their composition:

This transforms input of type to output of type : in the first phase, , it consumes all the input into an intermediate value of type ; in the second phase, , it produces all the output.

Under certain conditions, it is possible to fuse these two phases; this time, not in order to eliminate an intermediate data structure (after all, the intermediate type need not be structured), but rather in order to allow some production steps to happen before all the consumption steps are complete.

To that end, we define the function as follows:

This takes the same arguments as . It maintains a current state , and produces an output element when it can; and when it can’t produce, it consumes an input element instead. In more detail, it examines the current state using function , which is like the body of an unfold; this may produce a first element of the result and a new state ; when it yields no element, the next element of the input is consumed using function , which is like the body of a ; and when no input remains either, we are done.

The *streaming condition* for and is that

Consider a state from which the body of the unfold is productive, yielding some . From here we have two choices: we can either produce the output , move to intermediate state , then consume the next input to yield a final state ; or we can consume first to get the intermediate state , and again try to produce. The streaming condition says that this intermediate state will again be productive, and will yield the same output and the same final state . That is, instead of consuming all the inputs first, and then producing all the outputs, it is possible to produce some of the outputs early, without jeopardizing the overall result. Provided that the streaming condition holds for and , then

for all finite lists .

As a simple example, consider the `buffering’ process , where

Note that , so is just a complicated way of writing as a . But the streaming condition holds for and (as you may check), so may be streamed. Operationally, the streaming version of consumes one list from the input list of lists, then peels off and produces its elements one by one; when they have all been delivered, it consumes the next input list, and so on.
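The streaming operator and the buffering example might be rendered as follows. This is a reconstruction; in particular, the argument order and the names are my assumptions.

```haskell
-- stream g f s xs: produce with g when possible, else consume with f
stream :: (s -> Maybe (b, s)) -> (s -> a -> s) -> s -> [a] -> [b]
stream g f s xs = case g s of
  Just (y, s') -> y : stream g f s' xs        -- production step
  Nothing      -> case xs of
    x : xs' -> stream g f (f s x) xs'         -- consumption step
    []      -> []                              -- input exhausted

-- peel off the first element of the buffer, if any
next :: [a] -> Maybe (a, [a])
next []       = Nothing
next (x : xs) = Just (x, xs)

-- the buffering example: streaming concat over a list of lists
buffering :: [[a]] -> [a]
buffering = stream next (++) []
```

Operationally, `buffering` consumes one inner list, peels off and produces its elements one by one, then consumes the next inner list, and so on; it also works on suitable infinite inputs.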

The streaming version of is actually rather special, because the production steps can always completely exhaust the intermediate state. In contrast, consider the `regrouping’ example where

from the introduction (here, yields where , with when and otherwise). This transforms an input list of lists into an output list of lists, where each output `chunk’ except perhaps the last has length ; if the content doesn’t divide up evenly, then the last chunk is short. One might hope to be able to stream , but it doesn’t quite work with the formulation so far. The problem is that is too aggressive, and will produce short chunks when there is still some input to consume. (Indeed, the streaming condition does not hold for and ; why not?) One might try the more cautious producer :

But this never produces a short chunk, and so if the content doesn’t divide up evenly then the last few elements will not be extracted from the intermediate state and will be lost.

We need to combine these two producers somehow: the streaming process should behave cautiously while there is still remaining input, which might influence the next output; but it should then switch to a more aggressive strategy once the input is finished, in order to flush out the contents of the intermediate state. To achieve this, we define a more general *flushing stream* operator:

This takes an additional argument ; when the cautious producer is unproductive, and there is no remaining input to consume, it uses to flush out the remaining output elements from the state. Clearly, specializing to retrieves the original operator.

The corresponding metamorphism uses an *apomorphism* in place of the unfold. Define

Then behaves like , except that if and when stops being productive it finishes up by applying to the final state. Similarly, define flushing metamorphisms:

Then we have

for all finite lists if the streaming condition holds for and . In particular,

on finite inputs : the streaming condition does hold for and the more cautious , and once the input has been exhausted, the process can switch to the more aggressive .
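A sketch of the flushing stream operator, together with the regrouping example run in this cautious-then-aggressive fashion; the names and argument order are again my assumptions.

```haskell
-- fstream g h f s xs: like stream, but flush the final state with h
fstream :: (s -> Maybe (b, s)) -> (s -> [b]) -> (s -> a -> s)
        -> s -> [a] -> [b]
fstream g h f s xs = case g s of
  Just (y, s') -> y : fstream g h f s' xs
  Nothing      -> case xs of
    x : xs' -> fstream g h f (f s x) xs'
    []      -> h s                            -- flushing phase

-- the regrouping example: chunks of length n
regroup :: Int -> [[a]] -> [[a]]
regroup n = fstream cautious flush (++) []
  where
    -- cautious producer: only ever emits full chunks
    cautious s | length s >= n = Just (splitAt n s)
               | otherwise     = Nothing
    -- aggressive flusher: emits whatever remains, short chunk and all
    flush [] = []
    flush s  = let (c, s') = splitAt n s in c : flush s'
```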

The main advantage of streaming is that it can allow the change-of-representation process also to work on infinite inputs. With the plain metamorphism, this is not possible: the will yield no result on an infinite input, and so the will never get started, but the may be able to produce some outputs before having consumed all the inputs. For example, the streaming version of also works for infinite lists, providing that the input does not end with an infinite tail of empty lists. And of course, if the input never runs out, then there is no need ever to switch to the more aggressive flushing phase.

As a more interesting example, consider converting a fraction from base 3 to base 7:

We assume that the input digits are all either 0, 1 or 2, so that the number being represented is in the unit interval.

The fold in is of the wrong kind; but we have also

Here, the intermediate state can be seen as a defunctionalized representation of the function , and applies this function to :

Now there is an extra function between the and the ; but that’s no obstacle, because it fuses with the :

However, the streaming condition does not hold for and . For example,

That is, , but , so it is premature to produce the first digit 2 in base 7 having consumed only the first digit 1 in base 3. The producer is too aggressive; it should be more cautious while input remains that might invalidate a produced digit.

Fortunately, on the assumption that the input digits are all 0, 1, or 2, the unconsumed input (a tail of the original input) again represents a number in the unit interval; so from the state the range of possible unproduced outputs represents a number between and . If these both start with the same digit in base 7, then (and only then) is it safe to produce that digit. So we define

and we have

Now, the streaming condition holds for and (as you may check), and therefore

on finite digit sequences in base 3. Moreover, the streaming program works also on *infinite* digit sequences, where the original does not.
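The whole pipeline might look like this; a self-contained reconstruction (with assumed names, and the streaming operator repeated for completeness) of the cautious base-3 to base-7 converter.

```haskell
stream :: (s -> Maybe (b, s)) -> (s -> a -> s) -> s -> [a] -> [b]
stream g f s xs = case g s of
  Just (y, s') -> y : stream g f s' xs
  Nothing      -> case xs of
    x : xs' -> stream g f (f s x) xs'
    []      -> []

-- State (u, v): the output value still to be delivered is u + v * x,
-- where x is the value of the remaining base-3 input digits.
toBase7 :: [Integer] -> [Integer]
toBase7 = stream safeNext step ((0, 1) :: (Rational, Rational))
  where
    -- consume one base-3 digit
    step (u, v) d = (u + v * (fromInteger d / 3), v / 3)
    -- produce a base-7 digit only when both bounds agree on it
    safeNext (u, v)
      | y == floor (7 * (u + v)) = Just (y, (7 * u - fromInteger y, 7 * v))
      | otherwise                = Nothing
      where y = floor (7 * u) :: Integer
```

Being a streaming computation, this works on infinite digit sequences too, such as the base-3 representation of 1/4.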

(Actually, the only way this could possibly produce a finite output in base 7 would be for the input to be all zeroes. Why? If we are happy to rule out this case, we could consider only the case of taking infinite input to infinite output, and not have to worry about reaching the end of the input or flushing the state.)


What is the correct way to write breadth-first traversal of a ?

He’s thinking of “traversal” in the sense of the class, and gave a concrete declaration of rose trees:

It’s an excellent question.

First, let’s think about breadth-first enumeration of the elements of a tree. This isn’t compositional (a fold); but the related “level-order enumeration”, which gives a list of lists of elements, one list per level, is compositional:

Here, is “long zip with”; it’s similar to , but returns a list as long as its longer argument:

(It’s a nice exercise to define a notion of folds for , and to write as a fold.)

Given , breadth-first enumeration is obtained by concatenation:

Incidentally, this allows trees to be foldable, breadth-first:

Level-order enumeration is invertible, in the sense that you can reconstruct the tree given its shape and its level-order enumeration.

One way to define this is to pass the level-order enumeration around the tree, snipping bits off it as you go. Here is a mutually recursive pair of functions to relabel a tree with a given list of lists, returning also the unused bits of the lists of lists.

Assuming that the given list of lists is “big enough” (ie each list has enough elements for that level of the tree), the result is well-defined. Then is determined by the equivalence

Here, the of a tree is obtained by discarding its elements:

In particular, if the given list of lists is the level-order of the tree, and so is exactly the right size, then will have no remaining elements, consisting entirely of empty levels:

So we can take a tree apart into its shape and contents, and reconstruct the tree from such data.
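The mutually recursive relabelling pair and the shape function might be reconstructed like this (names assumed; the incomplete pattern match embodies the “big enough” assumption).

```haskell
data Tree a = Node a [Tree a] deriving (Eq, Show)

-- discard the elements, keeping only the shape
shape :: Tree a -> Tree ()
shape (Node _ ts) = Node () (map shape ts)

-- relabel a tree from a list of lists (one list per level),
-- returning also the unused suffix of each level
relabel :: (Tree a, [[b]]) -> (Tree b, [[b]])
relabel (Node _ ts, (y : ys) : yss) = (Node y us, ys : zss)
  where (us, zss) = relabels (ts, yss)

relabels :: ([Tree a], [[b]]) -> ([Tree b], [[b]])
relabels ([], yss)     = ([], yss)
relabels (t : ts, yss) = (u : us, zss)
  where (u, yss') = relabel (t, yss)
        (us, zss) = relabels (ts, yss')
```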

This lets us traverse a tree in breadth-first order, by performing the traversal just on the contents. We separate the tree into shape and contents, perform a list-based traversal, and reconstruct the tree.

This trick of traversal by factoring into shape and contents is explored in my paper Understanding Idiomatic Traversals Backwards and Forwards from Haskell 2013.

We’ve seen that level-order enumeration is invertible in a certain sense, and that this means that we perform traversal by factoring into shape and contents then traversing the contents independently of the shape. But the contents we’ve chosen is the level-order enumeration, which is a list of lists. Normally, one thinks of the contents of a data structure as simply being a list, ie obtained by breadth-first enumeration rather than by level-order enumeration. Can we do relabelling from the breadth-first enumeration too? Yes, we can!

There’s a very clever cyclic program for breadth-first relabelling of a tree given only a list, not a list of lists; in particular, breadth-first relabelling a tree with its own breadth-first enumeration gives back the tree you first thought of. In fact, the relabelling function is precisely the same as before! The trick comes in constructing the necessary list of lists:

Note that variable is defined cyclically; informally, the output leftovers on one level also form the input elements to be used for relabelling all the lower levels. Given this definition, we have

for any . This program is essentially due to Geraint Jones, and is derived in an unpublished paper Linear-Time Breadth-First Tree Algorithms: An Exercise in the Arithmetic of Folds and Zips that we wrote together in 1993.
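The cyclic construction might be rendered as follows (repeating the relabelling pair for self-containedness; the names are assumed): the leftovers of each level are fed back in as the inputs for the levels below.

```haskell
data Tree a = Node a [Tree a] deriving (Eq, Show)

relabel :: (Tree a, [[b]]) -> (Tree b, [[b]])
relabel (Node _ ts, (y : ys) : yss) = (Node y us, ys : zss)
  where (us, zss) = relabels (ts, yss)

relabels :: ([Tree a], [[b]]) -> ([Tree b], [[b]])
relabels ([], yss)     = ([], yss)
relabels (t : ts, yss) = (u : us, zss)
  where (u, yss') = relabel (t, yss)
        (us, zss) = relabels (ts, yss')

-- breadth-first relabelling from a plain list: the list of lists is
-- defined cyclically from the leftovers
bfRelabel :: Tree a -> [b] -> Tree b
bfRelabel t xs = u
  where (u, xss) = relabel (t, xs : xss)
```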

We can use this instead in the definition of breadth-first traversal:


The idea goes back to the adjunction between extension and intension in set theory—you can define a set by its *extension*, that is by listing its elements:

or by its *intension*, that is by characterizing those elements:

Expressions in the latter form are called *set comprehensions*. They inspired a programming notation in the SETL language from NYU, and have become widely known through list comprehensions in languages like Haskell. The structure needed of sets or of lists to make this work is roughly that of a *monad*, and Phil Wadler showed how to generalize comprehensions to arbitrary monads, which led to the “**do**” notation in Haskell. Around the same time, Phil Trinder showed that comprehensions make a convenient database query language. The comprehension notation has been extended to cover other important aspects of database queries, particularly aggregation and grouping. Monads and aggregations have very nice algebraic structure, which leads to a useful body of laws to support database query optimization.

Just as a warm-up, here is a reminder about Haskell’s list comprehensions.

This (rather concocted) example yields the list of all values of the expression as is drawn from and from and such that is divisible by , namely .

To the left of the vertical bar is the *term* (an expression). To the right is a comma-separated sequence of *qualifiers*, each of which is either a *generator* (of the form , with a variable and a list expression ) or a *filter* (a boolean expression). The scope of a variable introduced by a generator extends to all subsequent generators and to the term. Note that, in contrast to the mathematical inspiration, bound variables need to be generated from some existing list.
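The post’s original example is not reproduced here; a similarly concocted one, with a filter and two generators, would be:

```haskell
-- All products x * y with x drawn from [1..4] and y from [1..3],
-- such that x + y is even (a made-up example in the same spirit).
example :: [Int]
example = [ x * y | x <- [1 .. 4], y <- [1 .. 3], even (x + y) ]
-- evaluates to [1,3,4,3,9,8]
```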

The semantics of list comprehensions is defined by translation; see for example Phil Wadler’s Chapter 7 of The Implementation of Functional Programming Languages. It can be expressed equationally as follows:

(Here, denotes the empty sequence of qualifiers. It’s not allowed in Haskell, but it is helpful in simplifying the translation.)

Applying this translation to the example at the start of the section gives

More generally, a generator may match against a pattern rather than just a variable. In that case, it may bind multiple (or indeed no) variables at once; moreover, the match may fail, in which case it is discarded. This is handled by modifying the translation for generators to use a function defined by pattern-matching, rather than a straight lambda-abstraction:

or, more perspicuously,
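For instance (a hypothetical illustration, with `concatMap` standing in for the monadic machinery):

```haskell
-- A generator whose pattern may fail to match: failures are discarded.
catMaybes' :: [Maybe a] -> [a]
catMaybes' mxs = [ x | Just x <- mxs ]

-- Its translation uses a pattern-matching helper, not a plain lambda:
catMaybes'' :: [Maybe a] -> [a]
catMaybes'' mxs = concatMap h mxs
  where h (Just x) = [x]  -- successful match: singleton
        h Nothing  = []   -- failed match: discarded
```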

It is clear from the above translation that the necessary ingredients for list comprehensions are , singletons, , and the empty list. The first three are the operations arising from lists as a functor and a monad, which suggests that the same translation might be applicable to other monads too. But the fourth ingredient, the empty list, does not come from the functor and monad structures; that requires an extra assumption:

Then the translation for list comprehensions can be generalized to other monads:

(so ). The actual monad to be used is implicit; if we want to be explicit, we could use a subscript, as in ““.

This translation is different from the one used in the Haskell language specification, which to my mind is a little awkward: the empty list crops up in two different ways in the translation of list comprehensions—for filters, and for generators with patterns—and these are generalized in two different ways to other monads (to the method of the class in the first case, and the method of the class in the second). I think it is neater to have a monad subclass with a single method subsuming both these operators. Of course, this does mean that the translation forces a monad comprehension with filters or possibly failing generators to be interpreted in a monad in the subclass rather than just —the type class constraints that are generated depend on the features used in the comprehension. (Perhaps this translation was tried in earlier versions of the language specification, and found wanting?)

Taking this approach gives basically the monad comprehension notation from Wadler’s Comprehending Monads paper; it loosely corresponds to Haskell’s **do** notation, except that the term is to the left of a vertical bar rather than at the end, and that filters are just boolean expressions rather than introduced using .
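For comparison, here is a comprehension with a filter alongside its **do**-notation counterpart, in which the filter becomes a `guard`; this is the standard correspondence, shown concretely for lists:

```haskell
import Control.Monad (guard)

pyth :: [(Int, Int)]
pyth = [ (x, y) | x <- [1 .. 5], y <- [1 .. 5], x * x + y * y == 25 ]

pyth' :: [(Int, Int)]
pyth' = do
  x <- [1 .. 5]
  y <- [1 .. 5]
  guard (x * x + y * y == 25)
  return (x, y)
```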

We might impose the law that is a “left” zero of composition, in the sense

or, in terms of comprehensions,

Informally, this means that any failing steps of the computation cleanly cut off subsequent branches. Conversely, we do not require that is a “right” zero too:

This would have the consequence that a failing step also cleanly erases any effects from earlier parts of the computation, which is too strong a requirement for many monads—particularly those of the “launch missiles now” variety. (The names “left-” and “right zero” make more sense when the equations are expressed in terms of the usual Haskell bind operator , which is a kind of sequential composition.)
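In Haskell terms, with the empty list playing the role of the failing computation, the left-zero law for lists can be checked directly; lists happen to validate the right-zero law too, because they carry no other effects to erase:

```haskell
-- Left zero: a failing step cuts off what follows.
leftZero :: Bool
leftZero = (([] :: [Int]) >>= \x -> [x + 1]) == []

-- Lists also satisfy the right-zero law, having no effects to discard;
-- monads built on real effects (e.g. output or IO) generally do not.
rightZeroForLists :: Bool
rightZeroForLists = ([1, 2, 3 :: Int] >> ([] :: [Int])) == []
</test>
```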

One more ingredient is needed in order to characterize monads that correspond to “collection classes” such as sets and lists, and that is an analogue of set union or list append. It’s not difficult to see that this is inexpressible in terms of the operations introduced so far: given only collections of at most one element, any comprehension using generators of the form will only yield another such collection, whereas the union of two one-element collections will in general have two elements.

To allow any finite collection to be expressed, it suffices to introduce a binary union operator :

We require composition to distribute over union, in the following sense:

or, in terms of comprehensions,

For the remainder of this post, we will assume a monad in both and . Moreover, we will assume that is the unit of , and is both a left- and a right zero of composition. To stress the additional constraints, we will write “” for “” from now on. The intention is that such monads exactly capture collection classes; Phil Wadler has called these structures *ringads*. (He seems to have done so in an unpublished note *Notes on Monads and Ringads* from 1990, which is cited by some papers from the early 1990s. But Phil no longer has a copy of this note, and it’s not online anywhere… I’d love to see a copy, if anyone has one!)

(There are no additional methods; the class is the intersection of the two parent classes and , with the union of the two interfaces, together with the laws above.) I used roughly the same construction already in the post on Horner’s Rule.
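A hypothetical rendering of that hierarchy in Haskell (the class and method names here are my own, chosen to avoid clashing with the standard libraries):

```haskell
-- Empty and union are extra methods; the ringad class adds only laws.
class Monad m => MonadEmpty m where
  empty' :: m a

class MonadEmpty m => MonadUnion m where
  union :: m a -> m a -> m a

class MonadUnion m => Ringad m   -- no new methods, just the laws

instance MonadEmpty [] where empty' = []
instance MonadUnion [] where union = (++)
instance Ringad []
```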

As well as (finite) sets and lists, ringad instances include (finite) bags and a funny kind of binary tree (externally labelled, possibly empty, in which the empty tree is a unit of the binary tree constructor). These are all members of the so-called Boom Hierarchy of types—a name coined by Richard Bird, for an idea due to Hendrik Boom, who by happy coincidence is named for one of these structures in his native language. All members of the Boom Hierarchy are generated from the empty, singleton, and union operators, the difference being whether union is associative, commutative, and idempotent. Another ringad instance, but not a member of the Boom Hierarchy, is the type of probability distributions—either normalized, with a weight-indexed family of union operators, or unnormalized, with an additional scaling operator.

The well-behaved operations over monadic values are called the *algebras* for that monad—functions such that and . In particular, is itself a monad algebra. When the monad is also a ringad, necessarily distributes also over —there is a binary operator such that (exercise!). Without loss of generality, we write for ; these are the “reductions” of the Bird–Meertens Formalism. In that case, is a ringad algebra.

The algebras for a ringad amount to aggregation functions for a collection: the sum of a bag of integers, the maximum of a set of naturals, and so on. We could extend the comprehension notation to encompass aggregations too, for example by adding an optional annotation, writing say ““; although this doesn’t add much, because we could just have written “” instead. We could generalize from reductions to collection homomorphisms ; but this doesn’t add much either, because the map is easily combined with the comprehension—it’s easy to show the “map over comprehension” property

Leonidas Fegaras and David Maier develop a monoid comprehension calculus around such aggregations; but I think their name is inappropriate, because nothing forces the binary aggregating operator to be associative.

Note that, for to be well-defined, must satisfy all the laws that does— must be associative if is associative, and so on. It is not hard to show, for instance, that there is no on sets of numbers for which ; such an would have to be idempotent, which is inconsistent with its relationship with . (So, although denotes the sum of the squares of the odd elements of bag , the expression (with now a set) is not defined, because is not idempotent.) In particular, must be the unit of , which we write .
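Concretely, the well-defined aggregation mentioned here looks like this in Haskell, with lists standing in for bags:

```haskell
-- The sum of the squares of the odd elements of a "bag".
sumSqOdds :: [Int] -> Int
sumSqOdds xs = sum [ x * x | x <- xs, odd x ]
```

The corresponding set version with `sum` is exactly the ill-defined case discussed above, since addition is not idempotent.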

We can derive translation rules for aggregations from the definition

For empty aggregations, we have:

For filters, we have:

For generators, we have:

And for sequences of qualifiers, we have:

Putting all this together, we have:

We have seen that comprehensions can be interpreted in an arbitrary ringad; for example, denotes (the set of) the squares of the odd elements of (the set) , whereas denotes the bag of such elements, with a bag. Can we make sense of “heterogeneous comprehensions”, involving several different ringads?

Let’s introduce the notion of a *ringad morphism*, extending the familiar analogue on monads. For monads and , a monad morphism is a natural transformation —that is, a family of arrows, coherent in the sense that for —that also preserves the monad structure:

A ringad morphism for ringads is a monad morphism that also respects the ringad structure:

Then a ringad morphism behaves nicely with respect to ringad comprehensions—a comprehension interpreted in ringad , using existing collections of type , with the result transformed via a ringad morphism to ringad , is equivalent to the comprehension interpreted in ringad in the first place, but with the initial collections transformed to type . Informally, there are no surprises arising from the point at which ringad coercions take place, because the results are the same wherever they happen. This property is straightforward to show by induction over the structure of the comprehension. For the empty comprehension, we have:

For filters, we have:

For generators:

And for sequences of qualifiers:

For example, if is the obvious ringad morphism from bags to sets, discarding information about the multiplicity of repeated elements, and a bag of numbers, then

and both yield the set of squares of the odd members of . As a notational convenience, we might elide use of the ringad morphism when it is “obvious from context”—we might write just even when is a bag, relying on the “obvious” morphism . This would allow us to write, for example,

(writing for the extension of a bag), instead of the more pedantic

There is a forgetful function from any poorer member of the Boom Hierarchy to a richer one, flattening some distinctions by imposing additional laws—for example, from bags to sets, flattening distinctions concerning multiplicity—and I would class these forgetful functions as “obvious” morphisms. On the other hand, any morphisms in the opposite direction—such as sorting, from bags to lists, and one-of-each, from sets to bags—are not “obvious”, and so should not be elided; and similarly, I’m not sure that I could justify as “obvious” any morphisms involving non-members of the Boom Hierarchy, such as probability distributions.
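Modelling bags as lists and using `Data.Set` for sets, the commuting property can be checked concretely; `bagToSet` here is my sketch of the multiplicity-forgetting morphism:

```haskell
import qualified Data.Set as Set

-- The "obvious" morphism from bags (modelled as lists) to sets.
bagToSet :: Ord a => [a] -> Set.Set a
bagToSet = Set.fromList

-- Morphism after comprehension equals comprehension after morphism.
morphismCommutes :: [Int] -> Bool
morphismCommutes b =
  bagToSet [ x * x | x <- b, odd x ]
    == Set.fromList [ x * x | x <- Set.toList (bagToSet b), odd x ]
```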


*Accumulations* or *scans* distribute information contained in a data structure across that data structure in a given direction. The paradigmatic example is computing the running totals of a list of numbers, which can be thought of as distributing the numbers rightwards across the list, summing them as you go. In Haskell, this is an instance of the operator:

A special case of this pattern is to distribute the elements of a list rightwards across the list, simply collecting them as you go, rather than summing them. That’s the function, and it too is an instance of :

It’s particularly special, in the sense that it is the most basic ; any other instance can be expressed in terms of it:

This is called the *Scan Lemma* for . Roughly speaking, it states that a replaces every node of a list with a applied to that node’s predecessors. Read from right to left, the scan lemma is an efficiency-improving transformation, eliminating duplicate computations; but note that this only works on expressions where is a , because only then are there duplicate computations to eliminate. It’s an important result, because it relates a clear and simple specification on the right to a more efficient implementation on the left.
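In concrete Haskell terms, with running totals as the example, the Scan Lemma can be checked directly:

```haskell
import Data.List (inits)

-- Running totals as a left scan.
totals :: [Int] -> [Int]
totals = scanl (+) 0

-- Scan Lemma instance:  scanl f e = map (foldl f e) . inits
scanLemmaHolds :: Bool
scanLemmaHolds = totals [1, 2, 3, 4] == map (foldl (+) 0) (inits [1, 2, 3, 4])
```

The right-hand side recomputes each prefix sum from scratch (quadratic); the left-hand side shares work between adjacent prefixes (linear).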

However, the left-to-right operators , , and are a little awkward in Haskell, because they go against the grain of the cons-based (ie, right-to-left) structure of lists. I leave as a simple exercise for the reader the task of writing the more natural , , and , and identifying the relationships between them. Conversely, one can view etc as the natural operators for snoc-based lists, which are constructed from nil and snoc rather than from nil and cons.

What would , , , etc look like on different—and in particular, non-linear—datatypes? Let’s consider a simple instance, for homogeneous binary trees; that is, trees with a label at both internal and external nodes.

for which the obvious fold operator is

I’m taking the view that the appropriate generalization is to distribute data “upwards” and “downwards” through such a tree—from the leaves towards the root, and vice versa. This does indeed specialize to the definitions we had on lists when you view them vertically in terms of their “cons” structure: they’re long thin trees, in which every parent has exactly one child. (An alternative view would be to look at distributing data horizontally through a tree, from left to right and vice versa. Perhaps I’ll come back to that another time.)

The upwards direction is the easier one to deal with. An upwards accumulation labels every node of the tree with some function of its *descendants*; moreover, the descendants of a node themselves form a tree, so can be easily represented, and folded. So we can quite straightforwardly define:

where yields the root of a tree:

As with lists, the most basic upwards scan uses the constructors themselves as arguments:

and any other scan can be expressed, albeit less efficiently, in terms of this:
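A self-contained sketch of the pieces just described—the datatype, its fold, and the upwards scan—under my own choice of names:

```haskell
-- Homogeneous binary trees: a label at every internal and external node.
data HTree a = HLeaf a | HNode (HTree a) a (HTree a) deriving (Eq, Show)

foldH :: (a -> b) -> (b -> a -> b -> b) -> HTree a -> b
foldH leaf node (HLeaf a)     = leaf a
foldH leaf node (HNode l a r) = node (foldH leaf node l) a (foldH leaf node r)

root :: HTree a -> a
root (HLeaf a)     = a
root (HNode _ a _) = a

-- Upwards scan: label every node with a fold of its descendants,
-- sharing the child results rather than recomputing them.
scanU :: (a -> b) -> (b -> a -> b -> b) -> HTree a -> HTree b
scanU leaf node (HLeaf a)     = HLeaf (leaf a)
scanU leaf node (HNode l a r) = HNode l' (node (root l') a (root r')) r'
  where l' = scanU leaf node l
        r' = scanU leaf node r
```

For example, `scanU id (\x a y -> x + a + y)` labels every node with the sum of its descendants.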

The downwards direction is more difficult, though. A downwards accumulation should label every node with some function of its *ancestors*; but these do not form another tree. For example, in the homogeneous binary tree

the ancestors of the node labelled are the nodes labelled . One could represent those ancestors simply as a list, ; but that rules out the possibility of a downwards accumulation treating left children differently from right children, which is essential in a number of algorithms (such as the parallel prefix and tree drawing algorithms in my thesis). A more faithful rendering is to define a new datatype of *paths* that captures the left and right turns—a kind of non-empty cons list, but with both a “left cons” and a “right cons” constructor:

(I called them “threads” in my thesis.) Then we can capture the data structure representing the ancestors of the node labelled

by the expression . I leave it as an exercise for the more energetic reader to work out a definition for

to compute the tree giving the ancestors of every node, and for a corresponding .
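One possible answer to the exercise, as a sketch (repeating the tree type for self-containedness, and using my own constructor names for the path datatype):

```haskell
data HTree a = HLeaf a | HNode (HTree a) a (HTree a) deriving (Eq, Show)

-- A non-empty cons list with both a "left cons" and a "right cons".
data Path a = Single a | LCons a (Path a) | RCons a (Path a)
  deriving (Eq, Show)

-- Label every node with the path of its ancestors, top-down, using an
-- accumulating path-building function.
paths :: HTree a -> HTree (Path a)
paths = go id
  where
    go k (HLeaf a)     = HLeaf (k (Single a))
    go k (HNode l a r) = HNode (go (k . LCons a) l)
                               (k (Single a))
                               (go (k . RCons a) r)
```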

Having seen ad-hoc constructions for a particular kind of binary tree, we should consider what the datatype-generic construction looks like. I discussed datatype-generic upwards accumulations already, in the post on Horner’s Rule; the construction was given in the paper Generic functional programming with types and relations by Richard Bird, Oege de Moor and Paul Hoogendijk. As with homogeneous binary trees, it’s still the case that the generic version of labels every node of a data structure of type with the descendants of that node, and still the case that the descendants form a data structure also of type . However, in general, the datatype does not allow for a label at every node, so we need the *labelled variant* where . Then we can define

where returns the root label of a labelled data structure—by construction, every labelled data structure has a root label—and is the unique arrow to the unit type. Moreover, we get a datatype-generic operator, and a Scan Lemma:

The best part of a decade after my thesis work, inspired by the paper by Richard Bird & co, I set out to try to define datatype-generic versions of downward accumulations too. I wrote a paper about it for MPC 1998, and then came up with a new construction for the journal version of that paper in SCP in 2000. I now think these constructions are rather clunky, and I have a better one; if you don’t care to explore the culs-de-sac, skip this section and the next and go straight to the section on derivatives.

The MPC construction was based around a datatype-generic version of the datatype above, to represent the “ancestors” of a node in an inductive datatype. The tricky bit is that data structures in general are non-linear—a node may have many children—whereas paths are linear—every node has exactly one child, except the last which has none; how can we define a “linear version” of ? Technically, we might say that a functor is linear (actually, “affine” would be a better word) if it distributes over sum.

The construction in the paper assumed that was a sum of products of literals

where each is either , , or some constant type such as or . For example, for leaf-labelled binary trees

the shape functor is , so (there are two variants), (the first variant has a single literal, ) and (the second variant has two literals, and ), and:

Then for each we define a -ary functor , where is the “degree of branching” of variant (ie, the number of s occurring in , which is the number of for which ), in such a way that

and is linear in each argument except perhaps the first. It’s a bit messy to give an explicit construction for , but roughly speaking,

where is “the next unused ” when , and just otherwise. For example, for leaf-labelled binary trees, we have:

Having defined the linear variant of , we can construct the datatype of paths, as the inductive datatype of shape where

That is, paths are a kind of non-empty cons list. The path ends at some node of the original data structure; so the last element of the path is of type , which records the “local content” of a node (its shape and labels, but without any of its children). Every other element of the path consists of the local content of a node together with an indication of which direction to go next; this amounts to the choice of a variant , followed by the choice of one of identical copies of the local contents of variant , where is the degree of branching of variant . We model this as a base constructor and a family of “cons” constructors for and .

For example, for leaf-labelled binary trees, the “local content” for the last element of the path is either a single label (for tips) or void (for bins), and for the other path elements, there are zero copies of the local content for a tip (because a tip has zero children), and two copies of the void local information for bins (because a bin has two children). Therefore, the path datatype for such trees is

which is isomorphic to the definition that you might have written yourself:

For homogeneous binary trees, the construction gives

which is almost the ad-hoc definition we had two sections ago, except that it distinguishes singleton paths that terminate at an external node from those that terminate at an internal one.

Now, analogous to the function which labels every node with its descendants, we can define a function to label every node with its ancestors, in the form of the path to that node. One definition is as a fold; informally, at each stage we construct a singleton path to the root, and map the appropriate “cons” over the paths to each node in each of the children (see the paper for a concrete definition). This is inefficient, because of the repeated maps; it’s analogous to defining by

A second definition is as an unfold, maintaining as an accumulating parameter of type the “path so far”; this avoids the maps, but it is still quadratic because there are no common subexpressions among the various paths. (This is analogous to an accumulating-parameter definition of :

Even with an accumulating “Hughes list” parameter, it still takes quadratic time.)
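The two list analogues alluded to here can be written out; both compute the same prefixes, and both are quadratic:

```haskell
-- inits via repeated maps: quadratic because of the maps.
initsMap :: [a] -> [[a]]
initsMap []       = [[]]
initsMap (x : xs) = [] : map (x :) (initsMap xs)

-- inits via an accumulating parameter: the maps are gone, but the result
-- itself has quadratic size with no shared subexpressions.
initsAcc :: [a] -> [[a]]
initsAcc = go []
  where go acc []       = [reverse acc]
        go acc (x : xs) = reverse acc : go (x : acc) xs
```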

The downwards accumulation itself is defined as a path fold mapped over the paths, giving a Scan Lemma for downwards accumulations. With either the fold or the unfold definition of paths, this is still quadratic, again because of the lack of common subexpressions in a result of quadratic size. However, in some circumstances the path fold can be reassociated (analogous to turning a into a ), leading finally to a linear-time computation; see the paper for the details of how.

I was dissatisfied with the “…”s in the MPC construction of datatype-generic paths, but couldn’t see a good way of avoiding them. So in the subsequent SCP version of the paper, I presented an alternative construction of downwards accumulations, which does not go via a definition of paths; instead, it goes directly to the accumulation itself.

As with the efficient version of the MPC construction, it is coinductive, and uses an accumulating parameter to carry in to each node the seed from higher up in the tree; so the downwards accumulation is of type . It is defined as an unfold, with a body of type

The result of applying the body will be constructed from two components, of types and : the first gives the root label of the accumulation and the seeds for processing the children, and the second gives the children themselves.

These two components get combined to make the whole result via a function

This will be partial in general, defined only for pairs of -structures of the same shape.

The second component of is the easier to define; given input , it unpacks the to , and discards the and the (recall that is the labelled variant of , where ).

For the first component, we enforce the constraint that all output labels are dependent only on their ancestors by unpacking the and pruning off the children, giving input . We then suppose as a parameter to the accumulation a function of type to complete the construction of the first component. In order that the two components can be zipped together, we require that is shape-preserving in its second argument:

where is the unique function to the unit type. Then, although the built from these two components depends on the partial function , it will still itself be total.

The SCP construction gets rid of the “…”s in the MPC construction. It is also inherently efficient, in the sense that if the core operation takes constant time then the whole accumulation takes linear time. However, use of the partial function to define a total accumulation is a bit unsatisfactory, taking us outside the domain of sets and total functions. Moreover, there’s now only half an explanation in terms of paths: accumulations in which the label attached to each node depends only on the *list* of its ancestors, and not on the left-to-right ordering of siblings, can be factored into a list function (in fact, a ) mapped over the “paths”, which is now a tree of lists; but accumulations in which left children are treated differently from right children, such as the parallel prefix and tree drawing algorithms mentioned earlier, can not.

After another interlude of about a decade, and with the benefit of new results to exploit, I had a “eureka” moment: the linearization of a shape functor is closely related to the beautiful notion of the *derivative* of a datatype, as promoted by Conor McBride. The crucial observation Conor made is that the “one-hole contexts” of a datatype—that is, for a container datatype, the datatype of data structures with precisely one element missing—can be neatly formalized using an analogue of the rules of differential calculus. The one-hole contexts are precisely what you need to identify which particular child you’re talking about out of a collection of children. (If you’re going to follow along with some coding, I recommend that you also read Conor’s paper Clowns to the left of me, jokers to the right. This gives the more general construction of *dissecting* a datatype, identifying a unique hole, but also allowing the “clowns” to the left of the hole to have a different type from the “jokers” to the right. I think the relationship with the differential calculus is much better explained there; the original notion of derivative can be retrieved by specializing the clowns and jokers to the same type.)

The essence of the construction is the notion of a *derivative* of a functor . For our purposes, we want the derivative in the second argument only of a bifunctor; informally, is like , but with precisely one missing. Given such a one-hole context, and an element with which to plug the hole, one can reconstruct the whole structure:

That’s how to consume one-hole contexts; how can we produce them? We could envisage some kind of inverse of , which breaks an -structure into an element and a context; but this requires us to invent a language for specifying which particular element we mean— is not injective, so needs an extra argument. A simpler approach is to provide an operator that annotates every position at once with the one-hole context for that position:

One property of is that it really is an annotation—if you throw away the annotations, you get back what you started with:

A second property relates it to —each of the elements in a hole position plugs into its associated one-hole context to yield the same whole structure back again:

(I believe that those two properties completely determine and .)

Incidentally, the derivative of a bifunctor can be elegantly represented as an *associated type synonym* in Haskell, in a type class of bifunctors differentiable in their second argument, along with and :

Conor’s papers show how to define instances of for all polynomial functors —anything made out of constants, projections, sums, and products.
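Here is a concrete sketch for a single shape functor—that of leaf-labelled binary trees, —with plain datatypes rather than an associated type synonym; the names are my own:

```haskell
-- Shape functor of leaf-labelled binary trees: F a b = a + b * b.
data F a b = FTip a | FBin b b deriving (Eq, Show)

-- Derivative in the second argument: one b-hole; the context records the
-- surviving sibling, tagged with which side the hole is on.
data DF a b = HoleL b | HoleR b deriving (Eq, Show)

-- Plug an element back into its one-hole context.
plug :: DF a b -> b -> F a b
plug (HoleL r) b = FBin b r   -- hole on the left; r was the right sibling
plug (HoleR l) b = FBin l b   -- hole on the right

-- Annotate every position at once with its one-hole context.
positions :: F a b -> F a (b, DF a b)
positions (FTip a)   = FTip a
positions (FBin x y) = FBin (x, HoleL y) (y, HoleR x)
```

Plugging each annotated element back into its context recovers the original structure, as the second property requires.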

The path to a node in a data structure is simply a list of one-hole contexts—let’s say, innermost context first, although it doesn’t make much difference—but with all the data off the path (that is, the other children) stripped away:

This is a projection of Huet’s zipper, which preserves the off-path children, and records also the subtree in focus at the end of the path:

Since the contexts are listed innermost-first in the path, closing up a zipper to reconstruct a tree is a over the path:
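As a sketch for leaf-labelled binary trees (my own names again), with contexts listed innermost first, closing up the zipper is a left fold along the path:

```haskell
data T = Leaf Int | Fork T T deriving (Eq, Show)

-- One-hole contexts: the sibling we passed on the way down.
data Ctx = InL T | InR T deriving (Eq, Show)

-- Close up a zipper: plug the focus back through each context in turn.
close :: (T, [Ctx]) -> T
close (t, cs) = foldl step t cs
  where step u (InL r) = Fork u r  -- we went left; r was the right sibling
        step u (InR l) = Fork l u  -- we went right; l was the left sibling
```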

Now, let’s develop the function , which turns a tree into a labelled tree of paths. We will write it with an accumulating parameter, representing the “path so far”:

Given the components of a tree and a path to its root, must construct the corresponding labelled tree of paths. Since and , this amounts to constructing a value of type . For the first component of this pair we will use , the path so far. The second component can be constructed from by identifying all children via , discarding some information with judicious s, consing each one-hole context onto to make a longer path, then making recursive calls on each child:

That is,

Downwards accumulations are then path functions mapped over the result of . However, we restrict ourselves to path functions that are instances of , because only then are there common subexpressions to be shared between a parent and its children (remember that paths are innermost-first, so related nodes share a tail of their ancestors).

Moreover, it is straightforward to fuse the with , to obtain

which takes time linear in the size of the tree, assuming that and take constant time.
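A specialization of the fused accumulation to homogeneous binary trees might look like this (tree type repeated for self-containedness; the interface with separate left- and right-turn operators is my assumption):

```haskell
data HTree a = HLeaf a | HNode (HTree a) a (HTree a) deriving (Eq, Show)

-- Linear-time downwards accumulation: the function accumulated along the
-- path so far is pushed down, extended differently for left and right turns.
scanD :: (a -> b) -> (b -> a -> b) -> (b -> a -> b) -> HTree a -> HTree b
scanD f dl dr = go f
  where go k (HLeaf a)     = HLeaf (k a)
        go k (HNode l a r) = HNode (go (dl (k a)) l) (k a) (go (dr (k a)) r)
```

For example, `scanD (const 1) (\d _ -> d + 1) (\d _ -> d + 1)` labels every node with its depth.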

Finally, in the case that the function being mapped over the paths is a as well as a , then we can apply the Third Homomorphism Theorem to conclude that it is also an associative fold over lists. From this (I believe) we get a very efficient parallel algorithm for computing the accumulation, taking time logarithmic in the size of the tree—even if the tree has greater than logarithmic depth.


the essential property behind Horner’s Rule is one of distributivity. In the datatype-generic case, we will model this as follows. We are given an -algebra [for a binary shape functor ], and a -algebra [for a collection monad ]; you might think of these as “datatype-generic product” and “collection sum”, respectively. Then there are two different methods of computing a result from an structure: we can either distribute the structure over the collection(s) of s, compute the “product” of each structure, and then compute the “sum” of the resulting products; or we can “sum” each collection, then compute the “product” of the resulting structure. Distributivity of “product” over “sum” is the property that these two different methods agree, as illustrated in the following diagram.

For example, with adding all the integers in an -structure, and finding the maximum of a (non-empty) collection, the diagram commutes.

There’s a bit of hand-waving above to justify the claim that this is really a kind of distributivity. What does it have to do with the common-or-garden equation

stating distributivity of one binary operator over another? That question is the subject of this post.
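The common-or-garden instance from the example above—“product” as addition, “sum” as binary maximum—can be stated and checked directly:

```haskell
-- Addition distributes over binary maximum:
--   a + (b `max` c) == (a + b) `max` (a + c)
plusDistributesOverMax :: Int -> Int -> Int -> Bool
plusDistributesOverMax a b c = a + max b c == max (a + b) (a + c)
```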

Recall that distributes the shape functor over the monad in its second argument; this is the form of distribution over effects that crops up in the datatype-generic Maximum Segment Sum problem. More generally, this works for any idiom ; this will be important below.

Generalizing in another direction, one might think of distributing over an idiom in both arguments of the bifunctor, via an operator , which is to say, , natural in the . This is the method of the subclass of that Bruno Oliveira and I used in our Essence of the Iterator Pattern paper; informally, it requires just that has a finite ordered sequence of “element positions”. Given , one can define .

That traversability (or equivalently, distributivity over effects) for a bifunctor is definable for any idiom, not just any monad, means that one can also conveniently define an operator for any traversable unary functor . This is because the constant functor (which takes any to ) is an idiom: the method returns the empty list, and idiomatic application appends two lists. Then one can define

where makes a singleton list. For a traversable bifunctor , we define where is the diagonal functor; that is, , natural in the . (No constant functor is a monad, except in trivial categories, so this convenient definition of contents doesn’t work monadically. Of course, one can use a writer monad, but this isn’t quite so convenient, because an additional step is needed to extract the output.)
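In Haskell, the constant-list idiom is `Const [a]`, whose `Applicative` instance (for any monoid) has `pure` return the empty list and `<*>` append; a sketch of the contents operator in those terms:

```haskell
import Data.Functor.Const (Const (..))

-- Contents via the constant-list applicative: traverse with singleton
-- lists, collecting elements in left-to-right order.
contents :: Traversable t => t a -> [a]
contents = getConst . traverse (\a -> Const [a])
```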

One important axiom of that I made recent use of in a paper with Richard Bird on Effective Reasoning about Effectful Traversals is that it should be “natural in the contents”: it should leave shape unchanged, and depend on contents only up to the extent of their ordering. Say that a natural transformation between traversable functors and “preserves contents” if . Then, in the case of unary functors, the formalization of “naturality in the contents” requires to respect content-preserving :

In particular, itself preserves contents, and so we expect

to hold.

Happily, the same generic operation provides a datatype-generic means to “fold” over the elements of an -structure. Given a binary operator and an initial value , we can define an -algebra —that is, a function —by

(This is a slight specialization of the presentation of the datatype-generic MSS problem from last time; there we had . The specialization arises because we are hoping to define such an given a homogeneous binary operator . On the other hand, the introduction of the initial value is no specialization, as we needed such a value for the “product” of an empty “segment” anyway.)

Incidentally, I believe that this “generic folding” construction is exactly what is intended in Ross Paterson’s Data.Foldable library.
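A minimal sketch of such generic folding, phrased via the standard `Foldable` machinery rather than the paper’s own combinators:

```haskell
import Data.Foldable (toList)

-- Reduce the contents of any Foldable structure with a binary operator
-- and an initial value, in the spirit of Data.Foldable.
crush :: Foldable t => (a -> a -> a) -> a -> t a -> a
crush op e = foldr op e . toList
```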

The other ingredient we need is an -algebra . We already decided last time to stick to reductions—s of the form for associative binary operator ; then we also have distribution over choice: . Note also that we prohibited empty collections in , so we do not need a unit for .

On account of being an algebra for the collection monad , we also get a singleton rule .

One of the take-home messages in the *Effective Reasoning about Effectful Traversals* paper is that it helps to reduce a traversal problem for datatypes in general to a more specific one about lists, exploiting the “naturality in contents” property of traversability. We’ll use that tactic for the distributivity property in the datatype-generic version Horner’s Rule.

In this diagram, the perimeter is the commuting diagram given at the start of this post—the diagram we have to justify. Face (1) is the definition of in terms of . Faces (2) and (3) are the expansion of as generic folding of an -structure. Face (4) follows from being an -algebra, and hence being a left-inverse of . Face (5) is an instance of the naturality property of . Face (6) is the property that respects the contents-preserving transformation . Therefore, the whole diagram commutes if Face (7) does—so let’s look at Face (7)!

Here’s Face (7) again:

Demonstrating that this diagram commutes is not too difficult, because both sides turn out to be list folds.

Around the left and bottom edges, we have a fold after a map , which automatically fuses to , where is defined by

or, pointlessly,

Around the top and right edges we have the composition . If we can write as an instance of , we can then use the fusion law for

to prove that this composition equals .
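To recall the fusion law in Haskell terms: if `h (f x y) == g x (h y)` for all `x` and `y`, then `h . foldr f e == foldr g (h e)`. A tiny concrete instance (my own choice of functions, purely for illustration):

```haskell
-- doubling after summing fuses to a single fold: take h = (2 *),
-- f = (+), and g x y = 2 * x + y, so that h (f x y) == g x (h y)
sumThenDouble, doubleThenSum :: [Int] -> Int
sumThenDouble = (2 *) . foldr (+) 0
doubleThenSum = foldr (\x y -> 2 * x + y) 0  -- h e = 2 * 0 = 0
```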

In fact, there are various equivalent ways of writing as an instance of . The definition given by Conor McBride and Ross Paterson in their original paper on idioms looked like the identity function, but with added idiomness:

In the special case that the idiom is a monad, it can be written in terms of (aka ) and :

But we’ll use a third definition:

where

That is,

Now, for the base case we have

as required. For the inductive step, we have:

which completes the fusion proof, modulo the wish about distributivity for :

As for that wish about distributivity for :

which discharges the proof obligation about distributivity for cartesian product, but again modulo two symmetric wishes about distributivity for collections:

Finally, the proof obligations about distributivity for collections are easily discharged, by induction over the size of the (finite!) collection, provided that the binary operator distributes over in the familiar sense. The base case is for a singleton collection, ie in the image of (because we disallowed empty collections); this case follows from the fact that is an -algebra. The inductive step is for a collection of the form with both strictly smaller than the whole (so, if the monad is idempotent, disjoint, or at least not nested); this requires the distribution of the algebra over choice , together with the familiar distribution of over .

So, the datatype-generic distributivity for -structures of collections that we used for the Maximum Segment Sum problem reduced to distributivity for lists of collections, which reduced to the cartesian product of collections, which reduced to that for pairs. That’s a much deeper hierarchy than I was expecting; can it be streamlined?


The original problem is as follows. Given a list of numbers (say, a possibly empty list of integers), find the largest of the sums of the contiguous segments of that list. In Haskell, this specification could be written like so:

where computes the contiguous segments of a list:

and computes the sum of a list, and the maximum of a nonempty list:

This specification is executable, but takes cubic time; the problem is to do better.
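The specification, spelled out in runnable Haskell (using `inits` and `tails` from `Data.List`; every contiguous segment is an initial segment of a tail segment):

```haskell
import Data.List (inits, tails)

-- executable but cubic-time specification of maximum segment sum;
-- the empty segment is allowed, so the answer is at least 0
mss :: [Int] -> Int
mss = maximum . map sum . segs
  where segs = concatMap inits . tails
```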

We can get quite a long way just with standard properties of , , etc:

For the final step, if we can write in the form , then the of this can be fused with the to yield ; this observation is known as the *Scan Lemma*. Moreover, if takes constant time, then this yields a linear-time algorithm for .

The crucial observation is based on Horner’s Rule for evaluation of polynomials, which is the first important thing you learn in numerical computing—I was literally taught it in secondary school, in my sixth-year classes in mathematics. Here is its familiar form:

but the essence of the rule is about sums of products:

Expressed in Haskell, this is captured by the equation

(where computes the product of a list of integers).
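In runnable form, both sides of that equation (the direct specification of the sum of the products of the initial segments, and the Horner-style right fold):

```haskell
import Data.List (inits)

-- Horner's Rule for sums of products of initial segments: the direct
-- quadratic specification and the linear right fold agree
sumProdInits, horner :: [Integer] -> Integer
sumProdInits = sum . map product . inits
horner       = foldr (\x y -> 1 + x * y) 1
```

For instance, on `[2,3,4]` both compute `1 + 2 + 2*3 + 2*3*4 = 33`.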

But Horner’s Rule is not restricted to sums and products; the essential properties are that addition and multiplication are associative, that multiplication has a unit, and that multiplication distributes over addition. This is the algebraic structure of a *semiring* (but without needing commutativity and a unit of addition, or that that unit is a zero of multiplication). In particular, the so-called *tropical semiring* on the integers, in which “addition” is binary maximum and “multiplication” is integer addition, satisfies the requirements. So for the maximum segment sum problem, we get

Moreover, takes constant time, so this gives a linear-time algorithm for .
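Putting the tropical instance of Horner's Rule together with the Scan Lemma gives the familiar linear-time program:

```haskell
-- linear-time maximum segment sum: Horner's Rule in the tropical semiring
-- (maximum as "addition", integer addition as "multiplication"),
-- fused with the scan via the Scan Lemma
mssFast :: [Int] -> Int
mssFast = maximum . scanr step 0
  where step x y = 0 `max` (x + y)  -- Horner step: constant time
```

The `scanr` computes, for each tail, the maximum sum of its initial segments; the outer `maximum` then ranges over all tails.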

About a decade after the initial “theory of lists” work on the maximum segment sum problem, Richard Bird (with Oege de Moor and Paul Hoogendijk) came up with a datatype-generic version of the problem in the paper *Generic functional programming with types and relations*. It’s clear what “maximum” and “sum” mean generically, but not so clear what “segment” means for nonlinear datatypes; the point of their paper is basically to resolve that issue.

Recalling the definition of in terms of and , we see that it would suffice to develop datatype-generic notions of “initial segment” and “tail segment”. One fruitful perspective is given in Bird & co’s paper: a “tail segment” of a cons list is just a subterm of that list, and an “initial segment” is the list but with some tail (that is, some subterm) replaced with the empty structure.

So, representing a generic “tail” of a data structure is easy: it’s a data structure of the same type, and a subterm of the term denoting the original structure. A datatype-generic definition of is a little trickier, though. For lists, you can see it as follows: *every node of the original list is labelled with the subterm of the original list rooted at that node*. I find this a helpful observation, because it explains why the of a list is one element longer than the list itself: a list with elements has nodes ( conses and a nil), and each of those nodes gets labelled with one of the subterms of the list. Indeed, ought morally to take a possibly empty list and return a *non-empty list* of possibly empty lists—there are two different datatypes involved. Similarly, if one wants the “tails” of a data structure of a type in which some nodes have no labels (such as leaf-labelled trees, or indeed such as the “nil” constructor of lists), one needs a variant of the datatype providing labels at those positions. Also, for a data structure in which some nodes have multiple labels, or in which there are different types of labels, one needs a variant for which *every node has precisely one label*.
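The “two different datatypes” point can be made concrete with `NonEmpty` from base (the name `tailsNE` is mine, for illustration):

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- tails with an honest type: a possibly empty list has a *non-empty* list
-- of tail segments, one per node (n conses plus a nil give n+1 subterms)
tailsNE :: [a] -> NonEmpty [a]
tailsNE []         = [] :| []
tailsNE xs@(_ : t) = xs :| NE.toList (tailsNE t)
```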

Bird & co call this the *labelled variant* of the original datatype; if the original is a polymorphic datatype for some binary shape functor , then the labelled variant is where —whatever labels may or may not have specified are ignored, and precisely one label per node is provided. Given this insight, it is straightforward to define a datatype-generic variant of the function:

where returns the root label of a labelled data structure, and is the unique arrow to the unit type. (Informally, having computed the tree of subterms for each child of a node, we make the tree of subterms for this node by assembling all the child trees with the label for this node; the label should be the whole structure rooted at this node, which can be reconstructed from the roots of the child trees.) What’s more, there’s a datatype-generic scan lemma too:

(Again, the label for each node can be constructed from the root labels of each of the child trees.) In fact, and are paramorphisms, and can also be nicely written coinductively as well as inductively. I’ll return to this in a future post.
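To make this concrete without the full generic machinery, here is the construction for rose trees, which are conveniently their own labelled variant (every node already has precisely one label); the names `subterms` and `scanRose` are mine:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Rose a = Rose a [Rose a] deriving (Eq, Show, Functor)

-- the datatype-generic "tails": label every node with the subterm of the
-- original structure rooted at that node
subterms :: Rose a -> Rose (Rose a)
subterms t@(Rose _ ts) = Rose t (map subterms ts)

foldRose :: (a -> [b] -> b) -> Rose a -> b
foldRose f (Rose a ts) = f a (map (foldRose f) ts)

-- the generic scan, specified as "fold every subterm"; the scan lemma
-- says it can be fused into a single bottom-up pass
scanRose :: (a -> [b] -> b) -> Rose a -> Rose b
scanRose f = fmap (foldRose f) . subterms
```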

What about a datatype-generic “initial segment”? As suggested above, that’s obtained from the original data structure by replacing some subterms with the empty structure. Here I think Bird & co sell themselves a little short, because they insist that the datatype supports empty structures, which is to say, that is of the form for some . This isn’t necessary: for an arbitrary , we can easily manufacture the appropriate datatype of “data structures in which some subterms may be replaced by empty”, by defining and .

As with , the datatype-generic version of is a bit trickier—and this time, the special case of lists is misleading. You might think that, because a list has just as many initial segments as it has tail segments, the labelled variant ought to suffice just as well here too. But this doesn’t work for non-linear data structures such as trees—in general, there are many more “initial” segments than “tail” segments (because one can make independent choices about replacing subterms with the empty structure in each child), and they don’t align themselves conveniently with the nodes of the original structure.

The approach I prefer here is just to use an unstructured collection type to hold the “initial segments”; that is, a monad. This could be the monad of finite lists, or of finite sets, or of finite bags—we will defer until later the discussion about precisely which, and write simply . We require only that it provide a -like interface, in the sense of an operator ; however, for reasons that will become clear, we will expect that it does *not* provide a operator yielding empty collections.

Now we can think of the datatype-generic version of as nondeterministically pruning a data structure by arbitrarily replacing some subterms with the empty structure; or equivalently, as generating the collection of all such prunings.

Here, supplies a new alternative for a nondeterministic computation:

and distributes the shape functor over the monad (which can be defined for all functors ). Informally, once you have computed all possible ways of pruning each of the children of a node, a pruning of the node itself is formed either as some node assembled from arbitrarily pruned children, or as the empty structure.
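For lists, with the finite-list monad as the collection, this nondeterministic pruning can be written directly, and the collection of prunings is exactly the initial segments:

```haskell
import Data.List (inits)

-- prune a list: nondeterministically replace some tail by the empty
-- structure; either cut here (yield []) or keep the head and prune the rest
prune :: [a] -> [[a]]
prune []       = [[]]
prune (x : xs) = [] : map (x :) (prune xs)
```

So `prune` coincides with `inits`, as the discussion of initial segments suggests.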

As we’ve seen, the essential property behind Horner’s Rule is one of distributivity. In the datatype-generic case, we will model this as follows. We are given an -algebra , and a -algebra ; you might think of these as “datatype-generic product” and “collection sum”, respectively. Then there are two different methods of computing a result from an structure: we can either distribute the structure over the collection(s) of s, compute the “product” of each structure, and then compute the “sum” of the resulting products; or we can “sum” each collection, then compute the “product” of the resulting structure. Distributivity of “product” over “sum” is the property that these two different methods agree, as illustrated in the following diagram.

For example, with adding all the integers in an -structure, and finding the maximum of a (non-empty) collection, the diagram commutes. (To match up with the rest of the story, we have presented distributivity in terms of a bifunctor , although the first parameter plays no role. We could just as well have used a unary functor, dropping the , and changing the distributor to .)

Note that is required to be an algebra for the monad . This means that it is not only an algebra for as a functor (namely, of type ), but also it should respect the extra structure of the monad: and . For the special case of monads for associative collections (such as lists, bags, and sets), and in homage to the old Squiggol papers, we will stick to *reductions*—s of the form for associative binary operator ; then we also have distribution over choice: . Note also that we prohibited empty collections in , so we do not need a unit for .

Recall that we modelled an “initial segment” of a structure of type as being of type , where . We need to generalize “product” to work on this extended structure, which is to say, we need to specify the value of the “product” of the empty structure too. Then we let , so that .

The datatype-generic version of Horner’s Rule is then about computing the “sum” of the “products” of each of the “initial segments” of a data structure:

We will use fold fusion to show that this can be computed as a single fold, given the necessary distributivity property.

(Sadly, I have to break this calculation in two to get it through WordPress’s somewhat fragile LaTeX processor… where were we? Ah, yes:)

Therefore,

(Curiously, it doesn’t seem to matter what value is chosen for .)

We’re nearly there. We start with the traversable shape bifunctor , a collection monad , and a distributive law . We are given an -algebra , an additional element , and a -algebra , such that and take constant time and distributes over in the sense above. Then

can be computed in linear time, where

and where

computes the contents of an -structure (which, like , can be defined using the traversability of ). Here’s the calculation:

The scan can be computed in linear time, because its body takes constant time; moreover, the “sum” and can also be computed in linear time (and what’s more, they can be fused into a single pass).

For example, with adding all the integers in an -structure, , and returning the greater of two integers, we get a datatype-generic version of the linear-time maximum segment sum algorithm.
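As a sketch of what that instance looks like once everything is fused, here is the algorithm specialized to internally-labelled binary trees (a pruning replaces some subtrees by the empty tree; `mssTree` is my name, and the pairing trick fuses the generic scan with the final "sum" into one bottom-up pass):

```haskell
data Tree = Tip | Bin Tree Int Tree

-- for each subtree, compute (best answer anywhere inside it, best pruning
-- sum rooted at it); the latter is the datatype-generic Horner step
mssTree :: Tree -> Int
mssTree = fst . go
  where
    go Tip         = (0, 0)
    go (Bin l a r) = (maximum [bl, br, here], here)
      where (bl, pl) = go l
            (br, pr) = go r
            here     = 0 `max` (a + pl + pr)  -- keep the node, or prune to empty
```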

As the title of their paper suggests, Bird & co carried out their development using the relational approach set out in the *Algebra of Programming* book; for example, their version of is a relation between data structures and their prunings, rather than a function that takes a structure and returns the collection of all its prunings. There’s a well-known isomorphism between relations and set-valued functions, so their relational approach looks roughly equivalent to the monadic one I’ve taken.

I’ve known their paper well for over a decade (I made extensive use of the “labelled variant” construction in my own papers on generic downwards accumulations), but I’ve only just noticed that although they discuss the maximum segment sum problem, they don’t discuss problems based on other semirings, such as the obvious one of integers with addition and multiplication—which is, after all, the origin of Horner’s Rule. Why not? It turns out that the relational approach doesn’t work in that case!

There’s a hidden condition in the calculation, which relates back to our earlier comment about which collection monad—finite sets, finite bags, lists, etc—to use. When is the set monad, distribution over choice ()—and consequently the condition that we used in proving Horner’s Rule—require to be idempotent, because itself is idempotent; but addition is not idempotent. For the same reason, the distributivity property does not hold for addition with the set monad. But everything does work out for the bag monad, for which is not idempotent. The bag monad models a flavour of nondeterminism in which multiplicity of results matters—as it does for the sum-of-products instance of the problem, when two copies of the same segment should be treated differently from just one copy. Similarly, if the order of results matters—if, for example, we were looking for the “first” solution—then we would have to use the list monad rather than bags or sets. Seen from a monadic perspective, the relational approach is *programming with just one monad*, namely the set monad; if that monad doesn’t capture your effects faithfully, you’re stuck.

(On the other hand, there are aspects of the problem that work much better relationally. We have carefully used only for a linear order, namely the usual ordering of the integers. A partial order is more awkward monadically, because there need not be a unique maximal value. For example, it is not so easy to compute a segment with maximal sum, unless we refine the sum ordering on segments to make it once more a linear order; relationally, this works out perfectly straightforwardly. We can try the same trick of turning the relation “maximal under a partial order” into the collection-valued function “all maxima under a partial order”, but I fear that the equivalent trick on the ordering itself—turning the relation “” into the collection-valued function “all values less than this one”—runs into problems, because it takes us outside the world of *finite* nondeterminism.)


We know and love initial algebras, because of the ease of reasoning with their universal properties. We can tell a simple story about recursive programs, solely in terms of sets and total functions. As we discussed in the previous post, given a functor , an -algebra is a pair consisting of an object and an arrow . A *homomorphism* between -algebras and is an arrow such that :

The -algebra is *initial* iff there is a unique such for each ; for well-behaved functors , such as the polynomial functors on , an initial algebra always exists. We conventionally write “” for the initial algebra, and “” for the unique homomorphism to another -algebra . (In , initial algebras correspond to datatypes of finite recursive data structures.)

The uniqueness of the solution is captured in the universal property:

In words, is this fold iff satisfies the defining equation for the fold.

The universal property is crucial. For one thing, the homomorphism equation is a very convenient style in which to define a function; it’s the datatype-generic abstraction of the familiar pattern for defining functions on lists:

These two equations implicitly characterizing are much more comprehensible and manipulable than a single equation

explicitly giving a value for . But how do we know that this assortment of two facts about is enough to form a definition? Of course! A system of equations in this form has a *unique solution*.
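In Haskell terms, the two-equation pattern `h [] = e; h (x:xs) = f x (h xs)` has the unique solution `h = foldr f e`. For instance, list length in that style:

```haskell
-- len []     = 0
-- len (x:xs) = 1 + len xs
-- is uniquely solved by the fold:
len :: [a] -> Int
len = foldr (\_ n -> 1 + n) 0
```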

Moreover, the very expression of the uniqueness of the solution as an equivalence provides many footholds for reasoning:

- Read as an implication from left to right, instantiating to to make the left-hand side trivially true, we get an *evaluation rule* for folds:

- Read as an implication from right to left, we get a proof rule for demonstrating that some complicated expression is a fold:

- In particular, we can quickly see that the identity function is a fold:

so . (In fact, this one’s an equivalence.)

- We get a very simple proof of a *fusion rule*, for combining a following function with a fold to make another fold:

- Using this, we can deduce *Lambek’s Lemma*, that the constructors form an isomorphism. Supposing that there is a right inverse, and it is a fold, what must it look like?

So if we define , we get . We should also check the left inverse property:

And so on, and so on. Many useful functions can be written as instances of , and the universal property gives us a very powerful reasoning tool—the universal property of is a marvel to behold.

And of course, it all dualizes beautifully. An -coalgebra is a pair with . A homomorphism between -coalgebras and is a function such that :

The -coalgebra is *final* iff there is a unique homomorphism *to* it from each ; again, for well-behaved , final coalgebras always exist. We write “” for the final coalgebra, and for the unique homomorphism to it. (In , final coalgebras correspond to datatypes of finite-or-infinite recursive data structures.)

Uniqueness is captured by the universal property

which has just as many marvellous consequences. Many other useful functions are definable as instances of , and again the universal property gives a very powerful tool for reasoning with them.

There are also many interesting functions that are best described as a *combination* of a fold and an unfold. The *hylomorphism* pattern, with an unfold followed by a fold, is the best known: the unfold produces a recursive structure, which the fold consumes.

The factorial function is a simple example. The datatype of lists of natural numbers is determined by the shape functor

Then we might hope to write

where and with

More elaborately, we might hope to write as the composition of (to generate a binary search tree) and (to flatten that tree to a list), where is the shape functor for internally-labelled binary trees,

partitions a list of integers into the unit or a pivot and two sublists, and

glues together the unit or a pivot and two sorted lists into one list. In fact, any divide-and-conquer algorithm can be expressed in terms of an unfold computing a tree of subproblems top-down, followed by a fold that solves the subproblems bottom-up.
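The quicksort hylomorphism, written out with the intermediate tree made explicit (using `partition` from `Data.List`; `build` and `flatten` are my names for the unfold and fold):

```haskell
import Data.List (partition)

data Tree a = Tip | Bin (Tree a) a (Tree a)

-- the unfold: partition the input around a pivot, building a binary
-- search tree of subproblems top-down
build :: Ord a => [a] -> Tree a
build []       = Tip
build (x : xs) = Bin (build smaller) x (build larger)
  where (smaller, larger) = partition (< x) xs

-- the fold: glue a pivot and two sorted sublists together, bottom-up
flatten :: Tree a -> [a]
flatten Tip         = []
flatten (Bin l x r) = flatten l ++ [x] ++ flatten r

qsort :: Ord a => [a] -> [a]
qsort = flatten . build
```

Of course, laziness means the tree is never built in full; but conceptually the divide (unfold) and conquer (fold) phases are exactly as described.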

But sadly, this doesn’t work in , because the types don’t meet in the middle. The source type of the fold is (the carrier of) an initial algebra, but the target type of the unfold is a final coalgebra, and these are different constructions.

This is entirely reasonable, when you think about it. Our definitions in —the category of sets and total functions—necessarily gave us folds and unfolds as total functions; the composition of two total functions is a total function, and so a fold after an unfold ought to be a total function too. But it is easy to define total instances of that generate infinite data structures (such as a function , which generates an infinite ascending list of naturals), on which a following fold is undefined (such as “the product” of an infinite ascending list of naturals). The composition then should not be a total function.

One might try interposing a conversion function of type , coercing the final data structure produced by the unfold into an initial data structure for consumption by the fold. But there is no canonical way of doing this, because final data structures may be “bigger” (perhaps infinitely so) than initial ones. (In contrast, there is a canonical function of type . In fact, there are two obvious definitions of it, and they agree—a nice exercise!)

One might try parametrizing that conversion function with a natural number, bounding the depth to which the final data structure is traversed. Then the coercion is nicely structural (in fact, it’s a fold over the depth), and everything works out type-wise. But having to thread such “resource bounds” through the code does terrible violence to the elegant structure; it’s not very satisfactory.

The usual solution to this conundrum is to give up on , and to admit that richer domain structures than sets and total functions are required. Specifically, in order to support recursive definitions in general, and the hylomorphism in particular, one should move to the category of *continuous functions* between *complete partial orders* (CPOs). Now is not the place to give all the definitions; see any textbook on denotational semantics. The bottom line, so to speak, is that one has to accept a definedness ordering on values—both on “data” and on functions—and allow some values to be less than fully defined.

Actually, in order to give meaning to all recursive definitions, one has to further restrict the setting to *pointed* CPOs—in which there is a least-defined “bottom” element for each type , which can be given as the “meaning” (solution) of the degenerate recursive definition at type . Then there is no “empty” CPO; the smallest CPO has just a single element, namely . As with colimits in general, this smallest object is used as the start of a chain of approximations to a limiting solution. But in order for really to be an initial object, one also has to constrain the arrows to be *strict*, that is, to preserve ; only then is there a unique arrow for each . The category of *strict continuous functions* between *pointed CPOs* is called .

It so happens that in , initial algebras and final coalgebras coincide: the objects (pointed CPOs) and are identical. This is very convenient, because it means that the hylomorphism pattern works fine: the structure generated by the unfold is exactly what is expected by the fold.

Of course, it can still happen that the composition yields a “partial” (less than fully defined) function; but at least it now type-checks. Categories with this initial algebra/final coalgebra coincidence are called *algebraically compact*; they were studied by Freyd, but there’s a very good survey by Adámek, Milius and Moss.

However, the story gets murkier than that. For one thing, does not have proper products. (Indeed, an algebraically compact category with products collapses.) But beyond that, —with its restriction to strict arrows—is not a good model of lazy functional programming; , with non-strict arrows too, is better. So one needs a careful balance of the two categories. The consequences for initial algebras and final coalgebras are spelled out in one of my favourite papers, *Program Calculation Properties of Continuous Algebras* by Fokkinga and Meijer. In a nutshell, one can only say that the defining equation for folds has a unique *strict* solution in ; without the strictness side-condition, is unconstrained (because for any ). But the situation for coalgebras remains unchanged—the defining equation for unfolds has a unique solution (and moreover, it is strict when is strict).

This works, but it means various strictness side-conditions have to be borne in mind when reasoning about folds. Done rigorously, it’s rather painful.

So, back to my confession. I want to write divide-and-conquer programs, which produce intermediate data structures and then consume them. Folds and unfolds in do not satisfy me; I want more—hylos. Morally, I realise that I should pay careful attention to those strictness side-conditions. But they’re so fiddly and boring, and my resolve is weak, so I usually just brush them aside. Is there a way that I can satisfy my appetite for divide-and-conquer programs while still remaining in the pure world?

Tarmo Uustalu and colleagues have a suggestion. Final coalgebras and algebraic compactness are sufficient but not necessary for the hylo diagram above to have a unique solution; they propose to focus on *recursive coalgebras* instead. The -coalgebra is “recursive” iff, for each , there is a unique such that :

This is a generalization of initial algebras: if has an initial algebra , then by Lambek’s Lemma has an inverse , and is a recursive coalgebra. And it is a strict generalization: it also covers patterns such as *paramorphisms* (primitive recursion)—since is a recursive -coalgebra where is the functor taking to —and the “back one or two steps” pattern used in the Fibonacci function.

Crucially for us, almost by definition it covers all of the “reasonable” hylomorphisms too. For example, is a recursive -coalgebra, where is the shape functor for lists of naturals and the -coalgebra introduced above that analyzes a natural into nothing (for zero) or itself and its predecessor (for non-zero inputs). Which is to say, for each , there is a unique such that ; in particular, for the given above that returns 1 or multiplies, the unique is the factorial function. (In fact, this example is also an instance of a paramorphism.) And is a recursive -coalgebra, where is the partition function of quicksort—for any -algebra , there is a unique such that , and in particular when is the glue function for quicksort, that unique solution is quicksort itself.

This works perfectly nicely in ; there is no need to move to more complicated settings such as or , or to consider partiality, or strictness, or definedness orderings. The only snag is the need to prove that a particular coalgebra of interest is indeed recursive. Capretta et al. study a handful of “basic” recursive coalgebras and of constructions on coalgebras that preserve recursivity.

More conveniently, Taylor and Adámek et al. relate recursivity of coalgebras to the more familiar notion of *variant* function, ie well-founded ordering on arguments of recursive calls. They restrict attention to *finitary* shape functors; technically, preserving directed colimits, but informally, I think that’s equivalent to requiring that each element of has a finite number of elements—so polynomial functors are ok, as is the finite powerset functor, but not powerset in general. If I understand those sources right, for a finitary functor and an -coalgebra , the following conditions are equivalent: (i) is recursive; (ii) is well-founded, in the sense that there is a well-founded ordering such that for each “element” of ; (iii) every element of has finite depth; and (iv) there is a coalgebra homomorphism from to .

This means that I can resort to simple and familiar arguments in terms of variant functions to justify hylo-style programs. The factorial function is fine, because ( is a finitary functor, being polynomial, and) the chain of recursive calls to which leads is well-founded; quicksort is fine, because the partitioning step is well-founded; and so on. Which takes a great weight of guilt off my shoulders: I can give in to the temptation to write interesting programs, and still remain morally as pure as the driven snow.
