Prexonite 1.2.2 – Power to the macro

TL;DR The new Prexonite release 1.2.2 is here!

In the nick of time…

When I announced my 3-month milestone cycle last time, I was sure that three months were waaay too long. Well, my gut feeling hasn’t betrayed me: the 1.2.2 release is late by over a week. Why? Two reasons: I had solved all the interesting problems at the beginning of the iteration, so towards the end all that was left was grunt work (re-implementing all call\* commands to support partial application). As a result, my free time got more and more taken over by other, more interesting activities, like StarCraft 2, exploring C++ and a super secret compiler-related tool/library. I only managed to finish the milestone by cutting one high-profile feature: automatic partial application of macros. But more on that later.

The Prexonite compiler architecture is really starting to show its age (read: less-than-ideal design). It is currently very easy to have the compiler generate invalid byte code (evaluation stack underflow and/or overflow), and there is nothing I can do to prevent it that isn’t a giant, ugly hack. I’d really love to completely rewrite the entire compiler of Prexonite (starting with the AST and code generation), but this is not something for the near future.

Macro Commands

Up until now, macros were in a very bizarre situation. For the first time, Prexonite Script code offered a feature with no counterpart in the managed (C#) world. Functions are mirrored by commands; Prexonite structs are mirrored by plain CLR types implementing IObject; compiler hooks and custom resolvers can both be implemented in either Prexonite Script or managed languages. But for macros, there was no equivalent – until now.

Let’s face it, the first implementation of Prexonite Script macros was a hack. I mean, injecting a fixed set of ‘magic variables’ into the function smelled bad right from the beginning. Obviously, the new managed macros and the interpreted ones should share as much architecture as possible, have the same capabilities and protections, and so on. Could you have imagined the method signature of the managed mechanism had I kept those six variables around? And the versioning hell?!

Ever wondered why it is common practice to pack all your .NET event arguments into an EventArgs class? Same reason: you want to be able to evolve the interface without breaking the event (changing its signature). Accordingly, the macro system now operates with a single value being passed to the macro implementation (whether it is written in Prexonite Script or managed code).
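As a reminder of that pattern, here is a generic C# sketch (illustrative only, with made-up type names; this is not part of the Prexonite API):

using System;

// All arguments travel in one EventArgs-derived class, so new members can be
// added later without changing the event's signature.
public class MacroExpandedEventArgs : EventArgs   // hypothetical example type
{
    public string MacroName { get; set; }
    // A later release could add e.g. `public int LineNumber { get; set; }`
    // without breaking existing subscribers.
}

public class MacroEngine                          // hypothetical example type
{
    public event EventHandler<MacroExpandedEventArgs> MacroExpanded;
}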

That one object, the macro context, provides access to the information previously available through those 6 variables. Mostly. The new interface is much more restrictive. It doesn’t give you direct access to the loader or the compiler target. This is a first step in turning macros from a compiler feature into a language/library feature. Ultimately, macros should have their own AST/internal representation, to be completely independent of the compiler.

Partial Application for macros

When designing programming language features, one should always strive for orthogonality. If there is a sensible way in which two features could be used together, it should probably be possible to use them together. This is not always easy, of course, and sometimes it is the one reason why you should not include a certain language feature at all.

In Prexonite Script, compile-time macros and partial application are kind of at odds with one another. Macros generate new code to be substituted, while partial application creates function values without generating new functions. These two features are not easily combined: in general, macros can expand to unique and completely unrelated code, and partially applying an arbitrary block of code is not possible (control flow into and out of the block cannot be captured by a function object, not in a transparent fashion).

Consequently, macro authors have to implement partial applications manually. Let’s see how you could implement a very simple logging macro that also supports partial application. It should accept a .NET format string, followed by an arbitrary number of arguments. When expanded, it should print the name and signature of the function from which it was called (expanded), along with the containing script file name, line and column, followed by the formatted message. It should obviously be possible to disable logging, so the macro should expand to null when the meta switch “debugging” is not set.

Now this just happens to be one of those macros that can be partially applied. It only uses the macro expansion mechanism to gather additional information (function name, source file, line and column). It doesn’t generate return statements, nor does it need to expand other macros in the same context. This means that you can implement the bulk of your macro in an ordinary function that takes two additional arguments: the source position and the calling function. The macro then only needs to gather this information and assemble a call to the implementing function.

function log\impl(pos, func, msg)
{
    // Format msg with the remaining arguments (everything after pos, func, msg)
    var finalMsg = call(msg.Format(?), var args >> skip(3));
    println(pos, ", ", func, ": ", finalMsg);
}

Generating this code via the AST wouldn’t be very fun anyway. Now on to the interesting part, the actual log macro.

macro log(msg) [is partial\macro;]
{
    if(context.Application.Meta["debugging"].Switch)
    {
        var funcStr = ast\const(context.Function.ToString); 
        var posStr = call\macro([__POS__]);
        
        var impl = ast\func(->log\impl.Id);
        impl.Call = context.Call;    
        impl.Arguments.Add(posStr);
        impl.Arguments.Add(funcStr);
        impl.Arguments.AddRange(var args);
        
        context.Block.Expression = impl;
    }
    
    return true;
}

On line 3, we check whether the debugging switch is enabled and skip the entire code generation if it isn’t. If we don’t generate any code, the macro system will supply an expression that evaluates to null.

The variable funcStr is initialized to a string literal containing the function’s name and signature (line 5). Calling ToString on the function object is necessary, because the constant node constructor ast\const expects its argument to be either a boolean, an integer, a double or a string. No implicit conversion is performed in order to avoid ambiguity.

On line 6, the source code position (file, line, column) is handled. We use a shorthand form of call\macro: the macro call we’d like to perform is written as a list literal, and the implementation of call\macro (a macro command since this release) translates it into a proper invocation. Written out in full, that invocation would look somewhat like this:

var posStr = call\macro(macro\reference(__POS__));

Having to type macro\reference every time you call a macro is no fun, especially not if the callee macro is statically known (which is quite often the case). You can still supply argument lists as additional arguments, just like with all other call\* commands.

On line 8, we start assembling the invocation of our implementation function. We could have just supplied the string "log\\impl" to the function call node constructor ast\func, but the proper way to do this is to acquire a reference to that function and retrieve its Id. This way, if we change the physical name of the function (but retain the log\impl symbol), our macro will still work.

Copying the call type of our macro to the implementation function call on line 9 is very important for maintaining the illusion of log being an ordinary function. If you write log = "hello", you will expect the "value" of this expression to be "hello", with the side effect of logging that message. If we don’t propagate the call type of the macro expansion to our function call expression, it will default to Get, which is not what the user expects.

Arguably, a macro should have no control over arguments outside of its argument list, but the way Prexonite understands the assignment operator, the right-hand side is just another argument and is thus fair game as far as the macro system is concerned. It’s also not easy to detect a forgotten propagation of context.Call: even though most macros that are Set-called will expand to a Set-call expression, this is not a necessity for a correct implementation.

On lines 10 through 12, we add our arguments to the implementation function call: the source position, the function name, as well as all arguments passed to the log macro. Don’t forget to actually assign the function call to the expanded code block. I can’t remember how many times I’ve forgotten this; the result would be the macro-system-supplied null expression. So, if your macros ever produce null, double-check whether you have actually returned the generated AST nodes.

Almost done. Up until now, we just wrote a normal macro with the Prexonite 1.2.2 macro system. To make it partially applicable, we need two more details. Firstly, we need to mark our macro with the macro switch partial\macro. This signals that our macro can attempt to expand a partial application. Because there are some situations where a macro cannot be partially applied (when the one thing that needs to be known at compile time is missing), a macro needs to be able to reject expansion.

Secondly, a partial macro (short for partially applicable macro) must return true; otherwise the macro system reports this particular expansion of the macro as "not partially applicable". Unlike normal macros, partial macros cannot return their generated AST; they must assign it to context.Block and/or context.Block.Expression. Also, the same macro code is used for both partial and total applications of the macro. You can use context.IsPartialApplication to distinguish the two cases.

However, if you build your macro carefully, you might not need to make that distinction. Our log macro doesn’t use this property at all, because the code we generate is the same in both scenarios. By building an ordinary invocation of log\impl, we don’t have to worry about partial application ourselves and can instead let the rest of the compiler worry about the details.

Calling Conventions

Prexonite now provides a whole family of very similar commands: call, call\member, call\async and call\macro, collectively known as call\*. They allow you to perform calls with a larger degree of control or with different semantics. With the exception of call\async, these are also the "calling conventions" found in Prexonite. call uses the IIndirectCall interface (used by ref variables and the expr.(args) syntax), call\member is used for object member calls, and call\macro allows you to invoke other macros rather than expanding them.

Unfortunately, these call\* commands didn’t play nicely with partial application. They were partially applicable, but only in a subset of the useful cases. Take this usage of call\member, for instance:

call\member(obj,?,[arg,?])

You might think that it creates a partial application with two mapped arguments: the name of the member to invoke as well as the second argument. All other arguments to that partial application would be treated as argument lists. Prior to Prexonite 1.2.2, this was not the case, because partial application doesn’t look for placeholders in nested expressions. The list constructor would be partially applied and passed to call\member as an argument. And since partial applications are not argument lists, this would result in an exception at runtime.

Prexonite 1.2.2 contains a new calling convention, call\star, which, well, handles calling call\*-family commands/functions. While that alone isn’t very exciting, it also scans its arguments for partially applied lists and adopts their placeholders. It enables you to do this:

call\star(call\member(?),[obj,?,arg,?])

This expression will yield a partial application (even though none of call\star’s direct arguments is a placeholder) that does what we expected our previous example to do. Of course, this is kind of unwieldy, which is why all other call\* commands have been rewritten as macros that take advantage of call\star. This means that in Prexonite 1.2.2, our example from above will actually work the way we expected it to.

Automatic partial application compatibility

One of the features I had planned for this release was a mechanism for automatically making some macros partially applicable. The user would have to announce (via a meta switch) that their macro conformed to certain constraints, and the macro system would automatically have made it partially applicable. My idea was to take the code generated by the partially applied macro and "extra-line" it into a separate function, sharing variables where necessary.

Unfortunately, the reality is somewhat more complicated. This kind of transplantation cannot be done in a completely transparent way. I’m also not sure if the Prexonite compiler is equipped to perform the necessary static analysis. (It would certainly be possible, but at the cost of maintainability; a hack, in other words.) Finally, this is not really what partial application stands for: the promise of not generating new functions for every call site would be broken.

I had a number of ideas for dealing with this issue: comparing generated functions with existing ones, trusting the user to generate the same code and re-using the "extra-lined" function after the first macro expansion. But you have seen how straightforward partial macros are when you follow the pattern of implementing the bulk of the code in an implementation function. At this point, automatic partial application compatibility doesn’t seem worth pursuing. There are more important issues at hand, such as improving code re-use in macros.

The next milestone: Prexonite 1.2.3

The next milestone will focus on usability, compatibility and reliability. All very important topics (if a bit dull). I would like to

  • compile, run and test Prexonite on Ubuntu via Mono 2.8+
  • support Unix #! script directives
  • build a unit-testing "framework" for Prexonite
  • provide automated tests for the Prexonite standard repository (psr)
  • integrate those tests into the automated testing of Prexonite
  • make the language a little bit more friendly
  • automate testing of round-tripping (compile & load should be equivalent to interpreting directly)

While this doesn’t seem like a lot of new features for Prexonite itself, it will actually involve quite a lot of work. I don’t know, for instance, how I will address the issue of directory separator characters in statements like build does require("psr\\macro.pxs");. Also, for #!-compatibility, I might have to completely change the command-line interface of Prx.exe. We’ll see how far I get. The fact that Ubuntu still doesn’t ship with a .NET 4.0 compatible version of Mono doesn’t help.

After that? That depends. I have some neat ideas for allowing scripts to specify minimum and maximum compiler and runtime versions. The macro system still needs some love; reusing code is difficult, to say the least. Block expressions are a possibility, though in its current form the compiler can’t handle them. Eventually I want to overcome this limitation, but that requires a rewrite of the entire compiler (or at least large parts of it).

Project Announcement: TinCal

I’d like to announce a project of mine that has been in planning since the end of January 2010: "TinCal". It’s a project I’ve been working on and off (mostly off) for the past ten months, besides school, Prexonite and another secret project. At this point, TinCal could still become anything from vaporware to my next "successful" (i.e. finished) programming language (and maybe my bachelor’s thesis). So yes, it is a programming language and thus the successor project to Prexonite, and as before, I’m aiming much higher this time. TinCal is definitely going to be statically typed and will compile directly to Common Intermediate Language (.NET byte code). But I don’t plan to stop there: true to my love for functional programming in general and Haskell in particular, TinCal is going to be a language extremely similar to the most successful lazy programming language to date.


Multithreaded Prexonite

Yesterday, I watched the Google TechTalk presentation of the programming language Go (see golang.org). Even though I don’t believe Go will become very popular, there was one aspect of the language that “inspired” me. One of Go’s major advantages is the fact that concurrency (and coordination thereof) is built deep into the language. Every function has the potential to become what they call a “goroutine” that works in parallel to all the other code. Communication and coordination are achieved through an equally simple system of synchronous channels (a buffer of size 0).

The code samples all looked so incredibly simple to me that I thought: I can do that too. I started by writing a case study (a code sample) and a sketched implementation of the synchronization mechanisms. It all looked nice on paper (in the Prexonite Script editor) but there was one major problem: None of the Prexonite execution engines is really capable of running on multiple threads. Even though the CIL compiled functions behave nicely, they don’t cover the whole language. Coroutines and other dynamic features are just too important to be ignored.

Fortunately, the design of the interpreter is centered around one abstract class: the StackContext. StackContexts come in a variety of forms and provide complete access to the current state of the interpreter. For the most part, these objects are passed around in the call hierarchy. And that’s good, because functions/methods that call each other are in the same thread, so stack contexts stay thread-local. At least the ones obtained via method arguments. But there is also another way to get your hands on stack contexts, and that’s the interpreter’s stack itself.

Of course, a multithreaded Prexonite will have multiple stacks, but just providing those isn’t enough. I had to ensure that direct queries to/manipulation of the stack are only directed at the stack of the currently executing thread. Since stack contexts are sometimes “captured” by other objects (coroutines, for instance), they can on occasion slip into other threads and wreak havoc with the stack of their original thread.

The only solution I saw was to use thread-local storage via unnamed data slots (sketched below). This has disadvantages, of course:

  • data slots are rather slow (not Reflection slow, but also not virtual call fast)
  • it is difficult to manipulate a different stack if you want to

but

  • it works!
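For illustration, the unnamed data slot approach looks roughly like this in C# (a sketch only; the class and member names are made up and the actual Prexonite implementation differs):

using System;
using System.Threading;

// Sketch: keep a per-thread stack object in an unnamed data slot.
public static class ThreadLocalStack
{
    private static readonly LocalDataStoreSlot _slot = Thread.AllocateDataSlot();

    public static object CurrentStack
    {
        get
        {
            var stack = Thread.GetData(_slot);
            if (stack == null)
            {
                stack = new object(); // stand-in for the real per-thread stack
                Thread.SetData(_slot, stack);
            }
            return stack;
        }
    }
}

Here is the Go-inspired case study, using a channel and call\async: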
//name~String, source~Channel
function work(name, source)
{
    println("\tWhat did $name say again?");
    var a = source.receive;
    println("\t$name said \"$(a)\"");
    return name: a;
}

function main()
{
    var source = chan; //Creates a new channel
    println("Get to Work!");
    //`call\async` invokes `work` in the background
    //`resp` is a channel that will receive the return value of `work`
    var resp = call\async(->work, ["Chris", source]);
    
    println("Oh, I forgot to tell you this: Chris said Hello!");
    source.send("Hello!"); //supply the missing value
    
    println("I'm waiting for you to finish...");
    var res = resp.receive;
    println("So your answer is $res then...");
    
    return 0;
}

results (sometimes) in

Get to Work!
        What did Chris say again?
Oh, I forgot to tell you this: Chris said Hello!
I'm waiting for you to finish...
        Chris said "Hello!"
So your answer is Chris: Hello! then...

Lists, coroutines and all other kinds of sequences appear very often in Prexonite. It is only logical to combine multithreading with the IEnumerable interface. The result is the `async_seq` command that takes an existing sequence and pushes its computation into a background thread.

The following example applies the two fantasy functions `sumtorial` and `divtorial` to the numbers 1 through 100. The ordinary `seq` version can only use one processor core, while the `async_seq` version pushes the computation of `sumtorial` into the background and onto the second core. Even though the management overhead is gigantic, a performance improvement can be measured (a modest factor of about 1.3).

function sumtorial(n) =
    if(n <= 1) 1
    else       n+sumtorial(n-1);
function divtorial(n) =
    if(n <= 1) 1
    else       n/divtorial(n-n/38-1);

function main()
{    
    var n = 100;
            
    var seq = 
        1.To(n) 
        >> map(->sumtorial)
        >> map(->divtorial);
   
    println("Sequential");
    println("\t",seq >> sum);

    
    var par = 
        1.To(n) 
        >> map(->sumtorial)
        >> async_seq
        >> map(->divtorial);
   
    println("Parallel");
    println("\t",par >> sum);
}

Multithreaded Prexonite currently lives in its own SVN branch as I am still experimenting with concrete implementations. A very nice improvement would be the move away from CLR threads as they don’t scale well. Communicating sequential processes spend a lot of time waiting for messages, something that does not require its own thread. Go can use 100’000 goroutines and more. I don’t even want to try this with CLR threads…

Lazy factorial

This post refers to Parvum, a research compiler/language of mine presented in the previous post. It’s a compiler written in Haskell that translates a “lambda calculus”-like language to Prexonite byte code assembler.

Did you know that a tiny modification to Parvum makes the language (most likely) Turing-complete? By adopting non-strict (lazy) evaluation, one can implement all sorts of control structures in nothing but Parvum, even without language support for boolean values and conditions.

Proof? Yes, please! Here you see the implementation of the factorial function in Parvum (currently hardwired to compute the factorial of 9):

(\bind.
  bind (\n. n (\x. x + 1) 0) \toInt.
  bind (\f x. x) \zero.
  bind (\f x. f x) \one.
  bind (\f x. f (f (f (f x)))) \four.
  bind (\n f x. f (n f x)) \succ.
  bind (\p a b. p a b) \ifthenelse.
  bind (\x y. x) \true.
  bind zero \false.
  bind (\n. n (\x. false) true) \isZero.
  bind (\n m. m succ n) \plus.
  bind (\n. n (\g k. isZero (g one) k (plus (g k) one) ) (\v. zero) zero) \pred.
  bind (\m n f. m (n f)) \mul .
  bind (\self n. 
    ifthenelse (isZero n)
      (one)
      (mul n (self self (pred n)))
    ) \facrec.
  bind (\n. facrec facrec n) \fac.

  toInt (fac (succ (succ (succ (succ (succ four))))))
)(\var body. body var)

76 seconds and a peak working set of over 650 MiB later, the number 362880 appears on the console. It works indeed.

I’ll spare you the compiled byte code, as it consists of 47 functions. Note that Prexonite integer arithmetic is only used in the lambda expression bound to `toInt`, which is applied after the factorial has been computed.

The actual computation is all done in Church numerals, so yes, the number 362880 is indeed a function `c` that applies a supplied function `f` 362880 times. The necessary control structures (`ifthenelse`) are entirely implemented using functions too.

Recursion was a bit tricky as Parvum does not (yet) have let-bindings. As you can see in the code, I solved this by passing `facrec` a reference to itself. It does look a bit strange, especially the `self self` bit, but it works.

The price paid

I admit, I cheated (a little): the laziness mechanisms are actually implemented in C# as part of Prexonite (they live in the SVN trunk). There is a new command called `thunk` which takes an expression (something callable, a lambda expression for instance) and an arbitrary number of parameters for that expression (optional). The return value is, surprise, a `Thunk` object. This object really has only one member: `force`.
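To give a rough idea of what that means, here is a minimal memoizing thunk sketched in C# (an illustration under assumptions only; the real Prexonite implementation works on Prexonite values and, as described below, on the Prexonite stack):

using System;

// Sketch of a memoizing thunk: evaluate the suspended expression at most once.
public sealed class Thunk
{
    private Func<object> _compute;  // the suspended expression
    private object _value;          // cached result after the first force

    public Thunk(Func<object> compute) { _compute = compute; }

    public object Force()
    {
        if (_compute != null)
        {
            _value = _compute();    // evaluate exactly once
            _compute = null;        // drop the closure, keep only the value
        }
        return _value;
    }
}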

`force`, well, forces the evaluation of the supplied expression until some concrete value is obtained. That value can of course contain further thunks. A lazy linked list, for instance:

function _consT hT tT = [hT,tT];
function _headT xsT = xsT.force[0];
function _tailT xsT = xsT.force[1];
function _refT xT = xT.force.();

function repeatT(x)
{
  var xsT;
  var xsT = thunk(->_consT, x, thunk(->_refT, ->xsT));
  return xsT;
}

//Equivalent in Haskell:
//repeat x = let xs = x:xs in xs

Identifiers ending in `T` represent thunks (or functions that take and return thunks), whereas a prepended `_` identifies what I call "primitive expressions". They are the building blocks for more complex expressions and most are equivalents of actual assembler instructions: `_refT`, for instance, is the primitive for `indarg.0`, the instruction responsible for dereferencing variable references (among other things).

Justification

In the last post, I justified the decision to compile to Prexonite byte code assembler by the lack of challenge in compiling to an actual high-level language like Neko or JavaScript (or Haskell).

Why is it that I suddenly fear a challenge? First of all, the laziness implementation in effect right now could also have been implemented in Prexonite Script (in fact, it already has been: `create_lazy` in psr\misc.pxs is an example; I will probably remove that structure now that I have a fully managed implementation).

Actually, `thunk` was more difficult to write in C# than it would have been in Prexonite Script. As I mentioned, the factorial program used over 650 MiB of RAM, and even though all Prexonite objects are stored on the heap, the stack frame overhead is gigantic. I had to add a new mechanism to the FunctionContext (the actual byte code interpreter) to allow the injection of managed code into the Prexonite stack.

That mechanism (the co-operative managed context) is similar to managed coroutines but behaves more like a function. Also managed code that was initially not intended to be run co-operatively (i.e. invoked as a member or as an indirect call) can “promote” itself onto the Prexonite stack.

Of course the Prexonite stack is not exactly known for its excellent performance, but unlike the managed stack, it is only limited by the amount of memory available.

A truly insignificant post

I’ve been toying around with Haskell lately, after I bought the excellent book Real World Haskell (there is also an online version where readers can comment on every paragraph). Every programmer should enjoy programming with a rigid but powerful type system like Haskell’s once in his or her career. It really puts things into perspective. Sure, Ruby and Python are cool for quick hacking, but nothing beats knowing that GHC accepts a program.

Indeed, I do catch most of my coding mistakes at compile time. Of course nothing prevents you from confusing (1-x) with (x-1) but Haskell really lets you focus on the actual code (once you get your program past the type checker).

But this post isn’t about me being a new fan of Haskell, but about something much more insignificant: yesterday I built a tiny compiler. For a tiny language.

Parvum

It’s called Parvum (which is Latin for, well, insignificant) and translates expressions in a “lambda calculus”-like language to a corresponding program in Prexonite bytecode assembler:

(bind.bind (x. x + 5) lambda.bind 2 calculus.lambda calculus)(x. f. f  x)

I’ve taken the liberty of adding integer literals and basic arithmetic, since they can be expressed in lambda calculus anyway (via Church numerals). If you want proof:

(bind.bind (f. x. x) zero.bind (f. x. f x) one.bind (f. x. f (f x)) two.bind (n. f. x. f (n f x)) succ.bind (n. m. m succ n) plus.bind (n. n (x. x + 1) 0) toLiteral.toLiteral (plus one (succ two)))(val. body. body val)

Just by using `one` and `succ` you can already express every natural number. Those Church numerals can be mapped to any other number system by supplying the successor function and the zero element. In the case of the built-in integers, that would be `x. x + 1` and `0`, respectively.

And this example indeed results in “4” just as expected.

I would love to reproduce the compiled assembler code here, but it is not pretty. The code consists of over 20 tiny functions, one for each `.` in the code above. And yes, there is no shorthand for curried multi-parameter functions. This is an insignificant language, after all.

Now why the effort?

Implementing Parvum was an interesting experience in Haskell. I got to use the parser combinator library `parsec` which embeds nicely into Haskell. Working with parsec really is great because you use one environment for the whole compiler. No separate grammar file and no machine generated code files. If I need a convenience function for my grammar (like a parameterised production) I just go ahead and create it.

Unlike all my previous parsers, this one focuses solely on building an AST. No name lookups, no transformations, just an abstract syntax tree. This tree is then fed into the compiler, which translates it into a representation of the program not entirely unlike what Prexonite uses: there are applications, which consist of global variables and functions, which in turn have parameters, local variables, shared names (via closures) and byte code. This model of the application is then printed via the HughesPJ pretty printer library.

Apart from the highly ambiguous grammar of lambda calculus notation, the biggest problem was implementing the translation step. Not because walking an AST in post-order is particularly difficult (Prexonite assembler is stack based), but because emitting embedded functions while generating code is not trivial.

  • First of all, you need a steady supply of unique names: state monad
  • You need to keep track of the current function environment (for shared names): reader monad and the local function
  • Finally the embedded functions have to be stored in the application: writer monad

which results in quite a monster of a monad: ReaderT PxsFunc (WriterT [PxsFunc] (State Int))

I’m not really happy with this solution, as it is yet another example of how you can duck behind imperative concepts once things get a bit more complicated. In this particular case, the decision to design the application model (an application contains functions and global variables etc.) like I did in C# is probably the culprit. In Prexonite, I build these data structures incrementally, adding and tweaking bits here and there. This just isn’t the way to go in Haskell. In Haskell, you’re supposed to find one final expression for every value, which in this case proved to be far from trivial.

Why Prexonite?

By now there are really a lot of options for compiler back ends and/or platforms, ranging from the Java VM, the CLR, Neko and LLVM to plain old C. Why would Prexonite be just the right choice?

Since Parvum is just a prototype, I needed a high-level back end, which basically narrowed the choice down to

  • Neko
  • Prexonite
  • any actual programming language

In particular, the back end had to support closures out of the box. Out of these I chose Prexonite bytecode assembler because it is high level but still requires a translation effort (I actually want to learn something implementing Parvum) and because it resembles CIL to some extent.
The latter is important because I might implement my next serious language on top of the CLR directly instead of the DLR.

The future

No, Parvum is not intended to replace Prexonite as my workhorse language of choice. It really is just a research object.

I plan to investigate the following:

  1. Arity checking
    • (1 + 5) :: *
    • (x. x + 1) :: * -> *
    • (x. y. x + y) 5 :: * -> *
    • etc.
  2. Extend language with 2nd data structure: Bool
    • Allows for predicates (comparison, and, not etc.)
    • Conditions
    • Makes finite recursion possible
  3. Simple type checker (Int vs. Bool) in addition to arity checking
    • My first static type checker *ever*

Unfortunately Parvum is strict, at least until I’ve found a way to encode lazy computations in the strict Prexonite VM. This is one place where I might cheat and just provide a `thunk` mechanism as part of Prexonite (say a command that wraps a computation and its arguments). We’ll see.

ACFactory Day 5: Welcome to Paper World

This is the fifth part of a series documenting the development of ‘ACFactory’ (ACFabrik in German), an application that generates printable character sheets for the Pen & Paper role playing game Arcane Codex (English page). You might also want to read ACFactory Day 4: Command & Control.

[Screenshot: ACFactory prototype showing two almost entirely empty pages.]

Pixels in WPF are device-independent and set at 96 pixels per inch. This means that one WPF pixel corresponds to one device pixel if and only if the resolution of the device is 96 dpi. Despite sounding complicated, this is a good thing, because WPF pixels have fixed dimensions: one pixel equals 1/96th of an inch, or 3.77952755905512 pixels per millimetre. (I’m sorry, but whoever invented inches should be slapped.) Naturally, it would not be practical to author the talent sheet in WPF pixels, constantly having to convert back and forth using a calculator or a table (or both: Excel).

[Screenshot: ACFactory zoom control.]

It would be nice to let someone else worry about my coordinate system of choice, someone like WPF. Unlike GDI+, WPF does not provide explicit coordinate system transformations, probably for a good reason. What you can do are layout transforms, and the scale transform looks particularly promising. Initialised with my magic number, I could author my sheets in millimetres and WPF would convert them into pixels. There is one problem, though: ScaleTransform not only transforms the coordinate system, but everything inside the affected control, including font sizes. While in 99 out of 100 cases this is the expected behaviour, in my exact scenario it’s not. Cambria 12pt should render like Cambria 12pt would render on paper. However, these normal-looking 12 points are scaled to giant 45.35… points by the ScaleTransform.

Maybe the StackOverflow.com community knows an answer. Until then, I created the hopefully temporary markup extension PaperFontSize, which automatically reverses the effects of such a ScaleTransform. Having everything set up, I can finally start implementing the talent sheet.
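For reference, the millimetre-to-pixel conversion can be expressed as a LayoutTransform; a rough C# sketch (my own illustration with a hypothetical helper class, not the actual ACFactory code):

using System.Windows;
using System.Windows.Media;

public static class PaperLayout // hypothetical helper
{
    // 96 device-independent pixels per inch, 25.4 mm per inch
    public const double PixelsPerMillimetre = 96.0 / 25.4; // ≈ 3.7795

    public static void AuthorInMillimetres(FrameworkElement sheet)
    {
        // Children of `sheet` can now be laid out in millimetre coordinates...
        sheet.LayoutTransform = new ScaleTransform(PixelsPerMillimetre, PixelsPerMillimetre);
        // ...but, as described above, font sizes get scaled as well, which is
        // exactly what the PaperFontSize markup extension has to undo.
    }
}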

ACFactory Day 4: Command & Control

This is the fourth part of a series documenting the development of ‘ACFactory’ (ACFabrik in German), an application that generates printable character sheets for the Pen & Paper role playing game Arcane Codex (English page).

You might also want to read Day 3: Authoring XAML.

Today, I was mostly lost in the vast unknown jungle that WPF is, even after having read “Windows Presentation Foundation Unleashed” by Adam Nathan (I really recommend it!).

Control

First, there was the difference between UserControls and custom controls. (No, I am not very familiar with any other UI framework.) UserControls are little more than an include on steroids (you get a code-behind file); if you *really* want to create something new or abstract over a composition of controls, then creating custom controls is your only option.

But custom controls are just C# (or VB.NET) code files. There is no XAML involved. How can that be the preferred way to author new controls? Remember that WPF controls are supposed to be look-less, platonic ideas of what they represent.
I wanted to create a zoom view control. What are the abstract properties of such a zoom view control?

  1. It has content
  2. It can magnify its content

Number 1 tells us to derive from ContentControl, the type that defines the Content property. Number 2 is a bit trickier. I decided that my control has a ZoomFactor property (type double, 1.0 == 100%) to which a ScaleTransform is bound. Whether or not this works, I am not exactly sure as the control is not working yet.
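As a rough sketch of the shape this takes (my own guess for illustration, not the actual ACFactory source), the control might declare ZoomFactor as a dependency property; a template would then bind a ScaleTransform’s ScaleX/ScaleY to it:

using System.Windows;
using System.Windows.Controls;

public class ZoomViewer : ContentControl
{
    public static readonly DependencyProperty ZoomFactorProperty =
        DependencyProperty.Register(
            "ZoomFactor", typeof(double), typeof(ZoomViewer),
            new FrameworkPropertyMetadata(1.0)); // 1.0 == 100%

    public double ZoomFactor
    {
        get { return (double)GetValue(ZoomFactorProperty); }
        set { SetValue(ZoomFactorProperty, value); }
    }
}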

But how does the control look? Well, that’s not the control’s concern. A look is provided by the Themes/Generic.xaml resource dictionary, the default fallback in the absence of local definitions and system-specific themes. In my case, there is going to be a neat little zoom control (combo box + slider) hovering in the top left corner.

Command

To establish communication between the ZoomViewer template and the ZoomViewer control, there is really only one good mechanism: commands. Commands are yet another abstraction that makes event handling more modular. Controls like buttons, hyperlinks and menu items can be “bound” to a certain command. That command determines whether they are enabled or not and what happens when they are clicked. You could, for instance, bind the menu item Edit > Paste, a toolbar button and Ctrl+V to the ApplicationCommands.Paste command, and they would all automatically be activated/deactivated depending on the state of the clipboard.
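In C#, such a binding might look roughly like this (a hand-written illustration, not code from this project; the helper class and the demo action are made up):

using System.Windows;
using System.Windows.Input;

public static class PasteCommandSetup // hypothetical helper
{
    public static void Attach(UIElement root)
    {
        // One command binding serves the Edit > Paste menu item, a toolbar
        // button and Ctrl+V alike; WPF queries CanExecute to enable/disable them.
        root.CommandBindings.Add(new CommandBinding(
            ApplicationCommands.Paste,
            (s, e) => MessageBox.Show(Clipboard.GetText()),      // Executed (demo action)
            (s, e) => e.CanExecute = Clipboard.ContainsText())); // CanExecute
    }
}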

Even better, the default implementation, RoutedCommand, works just like routed events and bubbles up the tree of your XAML interface. You can then define different command bindings at different locations in your UI. Best of all: via the command target property, you can tell the command routing where to look for command bindings. I could have two buttons that both invoke the NavigationCommands.Zoom command, but on two different ZoomViewers.

My ZoomViewer does support the NavigationCommands.IncreaseZoom, .DecreaseZoom and .Zoom commands. This is how the default control template can communicate with the ZoomViewer: by invoking those commands.

There is, however, one thing I found very irritating: neither the slider nor the combo box implements commands by default. MSDN contains a sample that shows how to do this. It turns out to come with quite a few things to watch out for:

  • You must differentiate between routed and ordinary commands as only the former can react to command bindings and can be set to originate from different InputElements.
  • You should pass a reference to the invoking control rather than null as the command target.
  • You must be careful with the CanExecuteChanged event handler. It must be correctly unset when the command is changed/removed.

How well this all turns out, we will hopefully see soon. Development right now is a bit sluggish, as I keep switching over to MSDN and/or my book for reference, since Visual Studio’s XAML editor is not very sophisticated, even with basic ReSharper support. This must get much better in VS10. Up until now, I have observed VS08SP1 crash only 3 times (twice due to a recursive binding *blush*) and once with an HRESULT of E_FAIL (whatever that exactly was). But at least I lost no code.

Oh, and why exactly the Microsoft Blend XAML editor does not provide any support is totally beyond me. I mean, free (as in: costs nothing) tools provide better code completion than Blend: Kaxaml, the “better XamlPad”. Even though it takes some time to load, I can definitely recommend it.

SealedSun.GamePanel June08

[ Download SealedSun.Gamepanel June08]

SealedSun.GamePanel is a high-level wrapper around the managed LcdInterface by http://www.cabhair-cainte.com/ismiselemeas/ that features a WinForms-like controls system as well as simplified event routing for the four soft buttons of the G15. The library is not very mature and only the most commonly used features are implemented. Everyone with basic knowledge of System.Drawing should be able to implement custom functionality via user controls.

Quick Start

1. Create a new LcdScreen

The LcdScreen represents an entry in the GamePanel manager and can be selected by the user via the application switch button on the keyboard.

using(var screen = new LcdScreen())
{
    // ...
}

2. Open the LcdScreen

To notify the GamePanel manager of the new screen, call the Open method.

screen.Open("My LCD");

3. Create a new LcdPage

An LCD page is like an empty canvas. It is the container for all your UI elements.

var page = new LcdPage(screen);

4. Add controls to the LcdPage

There are currently only two built-in general-purpose controls: LcdLabel and LcdImage. In order for your controls to show up, you need to add them to the page object. You can also nest controls by adding them to other controls. Always keep in mind that the positioning is relative to the parent control and that controls cannot exceed the bounds of their container.

var lTitle = new LcdLabel();
lTitle.Text = "My LCD";
lTitle.Location = new Point(10,10);
page.Controls.Add(lTitle);

5. Configure soft buttons

The page object provides access to the four soft buttons via its Button0 through Button3 members. Those special controls provide ‘button pressed’ events and are always located immediately above the respective buttons. Their Text and Image properties can be used to label them. If you need better control over the labelling, you can add child controls to the soft buttons just as you would to any other control.

page.Button0.Pressed += (sender,e) => {
  Console.WriteLine("0 Pressed!");
};

6. Select active page

In order for a page to be displayed on a screen, it must be assigned to the screen’s CurrentPage property. Only pages that are currently active will fire events.

screen.CurrentPage = page;

7. Enter message loop

Like a Windows Forms application, an LCD application has a message loop, too. Similar to Application.Run, screen.Run will use the current thread to process messages from the GamePanel manager.
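Mirroring the previous steps, that call would simply be (assuming Run takes no arguments):

screen.Run();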

From here on…

If you want to get the user’s attention, you can change the page’s screen priority to Alert for a short amount of time. A convenient way to do so is the TemporaryAlert method. This of course only works if the user allows high-priority screens.

page.TemporaryAlert(500);

If you want to quit the message loop, either close the screen or call Screen.ExitMessageLoop.

License and Disclaimer

Except for the components mentioned in ExternalResources.txt, I hereby release everything in the zip archive under the terms of the Creative Commons Attribution-Share Alike 2.5 Switzerland license. The software was written over the course of a few days and therefore did not undergo much testing. It is most likely that you will encounter bugs and unexpected behaviour. You use the library at your own risk.

Creative Commons License

SealedSun.GamePanel by Christian Klauser is licensed under a Creative Commons Attribution-Share Alike 2.5 Switzerland License.

Upcoming: Event-based G15 LCD library

