Lazy factorial

This post refers to Parvum, a research compiler/language of mine presented in the previous post. It's a compiler written in Haskell that translates a “lambda calculus”-like language to Prexonite byte code assembler.

Did you know that a tiny modification to Parvum makes the language (most likely) Turing-complete? By adopting non-strict (lazy) evaluation, one can implement all sorts of control structures in nothing but Parvum, even without language support for boolean values and conditionals.
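To see why laziness is the key ingredient, here is a quick sketch in Haskell, which is non-strict by default (the names below are mine, not Parvum's): Church-encoded booleans select one of two branches, and under lazy evaluation the untaken branch is never evaluated, so it may even diverge without harm.

```haskell
-- Church booleans: a boolean is a function choosing between two alternatives.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- "if-then-else" is just application of the boolean to both branches.
ifthenelse :: (a -> a -> a) -> a -> a -> a
ifthenelse p a b = p a b

-- Under non-strict evaluation the else-branch is never touched:
example :: Int
example = ifthenelse true 42 (error "never evaluated")
```

Under strict evaluation both branches would be evaluated before `p` ever sees them, and any recursive definition built this way would loop forever.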

Proof? Yes, please! Here you see the implementation of the factorial function in Parvum (currently hardwired to compute the factorial of 9):

(\bind.
  bind (\n. n (\x. x + 1) 0) \toInt.
  bind (\f x. x) \zero.
  bind (\f x. f x) \one.
  bind (\f x. f (f (f (f x)))) \four.
  bind (\n f x. f (n f x)) \succ.
  bind (\p a b. p a b) \ifthenelse.
  bind (\x y. x) \true.
  bind zero \false.
  bind (\n. n (\x. false) true) \isZero.
  bind (\n m. m succ n) \plus.
  bind (\n. n (\g k. isZero (g one) k (plus (g k) one) ) (\v. zero) zero) \pred.
  bind (\m n f. m (n f)) \mul.
  bind (\self n.
    ifthenelse (isZero n)
      one
      (mul n (self self (pred n)))
    ) \facrec.
  bind (\n. facrec facrec n) \fac.

  toInt (fac (succ (succ (succ (succ (succ four))))))
)(\var body. body var)

76 seconds and a peak working set of over 650 MiB later, the number 362880 appears on the console. It does indeed work.

I'll spare you the compiled byte code, as it consists of 47 functions. Note that Prexonite integer arithmetic is used only in the lambda expression bound to `toInt`, which is applied after the factorial has been computed.

The actual computation is all done in church numerals, so yes, the number 362880 is indeed a function `c` that applies a supplied function `f` 362880 times. The necessary control structures (`ifthenelse`) are entirely implemented using functions too.
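The Church-numeral arithmetic translates almost one-to-one into Haskell; here is a minimal sketch, specialised to `Int` for simplicity (the names mirror the Parvum bindings but are otherwise mine):

```haskell
-- A Church numeral applies a function f to a value x exactly n times.
type Church = (Int -> Int) -> Int -> Int

zero, one, four :: Church
zero f x = x
one  f x = f x
four f x = f (f (f (f x)))

succC :: Church -> Church
succC n f x = f (n f x)

-- Multiplication composes: m repetitions of "apply f n times".
mulC :: Church -> Church -> Church
mulC m n f = m (n f)

-- Convert back to an ordinary integer, exactly like toInt in the Parvum code.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

So `toInt (mulC four four)` applies the increment function sixteen times to zero.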

Recursion was a bit tricky, as Parvum does not (yet) have let-bindings. As you can see in the code, I solved this by passing `facrec` a reference to itself. It looks a bit strange, especially the `self self` bit, but it works.
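The same `self self` trick can be written down in Haskell, though the self-application needs a recursive newtype to get past the type checker (the wrapper and names are mine):

```haskell
-- A value that can be applied to itself.
newtype Self a = Self { unSelf :: Self a -> a }

-- facRec receives a reference to itself as its first argument,
-- just like the \self n. ... binding in the Parvum program.
facRec :: Self (Integer -> Integer) -> Integer -> Integer
facRec self n
  | n == 0    = 1
  | otherwise = n * unSelf self self (n - 1)

-- Tie the knot by handing facRec to itself.
fac :: Integer -> Integer
fac = facRec (Self facRec)
```

In an untyped language like Parvum the newtype wrapper is unnecessary; `self self` works directly.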

The price paid

I admit, I cheated (a little): the laziness mechanism is actually implemented in C# as part of Prexonite (live in the SVN trunk). There is a new command called `thunk` which takes an expression (something callable, a lambda expression for instance) and, optionally, an arbitrary number of parameters for that expression. The return value is, surprise, a `Thunk` object. This object really has only one member: `force`.

`force`, well, forces the evaluation of the supplied expression until some concrete value is obtained. That value can, of course, contain further thunks. A lazy linked list, for instance:

function _consT hT tT = [hT,tT];
function _headT xsT = xsT.force[0];
function _tailT xsT = xsT.force[1];
function _refT xT = xT.force.();

function repeatT(x)
  var xsT;
  xsT = thunk(->_consT, x, thunk(->_refT, ->xsT));
  return xsT;

//Equivalent in Haskell:
//repeat x = let xs = x:xs in xs

Identifiers ending in `T` represent thunks (or functions that take and return thunks), whereas a prepended `_` identifies what I call "primitive expressions". They are the building blocks for more complex expressions, and most are equivalents of actual assembler instructions: `_refT`, for instance, is the primitive for `indarg.0`, the instruction responsible for dereferencing variable references (among other things).
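For illustration, the `Thunk`/`force` pair can be modelled in Haskell with explicit memoisation: an `IORef` caches the result so the suspended computation runs at most once. This is my sketch, not Prexonite's actual C# implementation:

```haskell
import Data.IORef

-- A thunk is either a suspended computation or its cached result.
newtype Thunk a = Thunk (IORef (Either (IO a) a))

-- Suspend a computation without running it.
thunk :: IO a -> IO (Thunk a)
thunk act = Thunk <$> newIORef (Left act)

-- Force the thunk: run the computation once, then reuse the cached value.
force :: Thunk a -> IO a
force (Thunk ref) = do
  state <- readIORef ref
  case state of
    Right v  -> return v              -- already forced: reuse the cached value
    Left act -> do
      v <- act                        -- run the suspended computation once
      writeIORef ref (Right v)        -- memoise the result
      return v
```

Note that `force` only evaluates one layer: the value it returns may itself contain further thunks, which is exactly what makes the lazy list above possible.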


In the last post, I justified the decision to compile to Prexonite byte code assembler with the lack of challenge in compiling to an actual high-level language like Neko or JavaScript (or Haskell).

So why do I suddenly shy away from a challenge? First of all, the laziness implementation currently in effect could also have been implemented in Prexonite Script. In fact, it already has been: `create_lazy` in psr\misc.pxs is an example (I will probably remove that structure now that I have a fully managed implementation).

Actually, `thunk` was more difficult to write in C# than it would have been in Prexonite Script. As I mentioned, the factorial program used over 650 MiB of RAM, and even though all Prexonite objects are stored on the heap, the stack frame overhead is gigantic. I had to add a new mechanism to the FunctionContext (the actual byte code interpreter) to allow the injection of managed code into the Prexonite stack.

That mechanism (the co-operative managed context) is similar to managed coroutines but behaves more like a function. Also, managed code that was initially not intended to be run co-operatively (i.e. invoked as a member or as an indirect call) can “promote” itself onto the Prexonite stack.

Of course the Prexonite stack is not exactly known for its excellent performance, but unlike the managed stack, it is only limited by the amount of memory available.
