Saturday, April 6, 2013

Lazy like a hummingbird

In my travels through the world of high-level languages, the main language that caught my attention was Common Lisp.  I was drawn to it because it was the most powerful language I had ever encountered.  With macros, the full language available at compile time, restartable exceptions, and the most powerful object system in computer science, it was hard to imagine a more expressive language.

But then I started looking into Haskell.  I've come to the conclusion that it is on equal footing with my beloved Common Lisp, if not beyond.  And now I'm starting to do work in it.  I had avoided that for a long time because, while I realised Haskell was powerful, I was afraid the type system (good though it is) would be constantly in my way, reducing the overall value of the language's power to me [1].  After using it for a few days now, I find that is not the case at all.  In fact, once one gets past the initial frustration of code not immediately compiling, the type system becomes quite liberating.

I'm coding in Eclipse with the Haskell extension, so the IDE can compile the code as I'm writing it and I get near-instant feedback on any type errors.  Eclipse is pretty shoddy and not nearly as powerful as the Visual Studio + ReSharper combination I work with in my day job, but it saves time and appears to be the best option currently.  With this setup I can just make whatever changes I want and wait for the IDE to tell me everything I have to update.

A type analogy

After spending some time building in Haskell, I've come to realise that the type system is to dynamic typing what object orientation is to global variables.  That is, OO didn't remove global variables; it simply demarcated them.  It's still possible for a variable to end up with an unexpected value set by some member function, with no quick way of determining how it happened [2], but we can isolate exactly which methods could be responsible because non-members can't access the member variables at all.

The type system is similar: you can still have whatever types you want.  If you want a variable that can be an int, a float, a string or a character, and automatically convert among them, you can do that.  But what Haskell's type system does for you is demarcate where this can happen.  You can still end up with a character where you expected a float, but only in those places where you've chosen those kinds of types.  Just as OO limits your global-variable surface area, Haskell limits your type-based-error surface area.

All dressed up...

So what am I going to do, exactly?  I'm planning on making a content management system in Haskell. :) I know about the web systems out there, but none of them work exactly as I want.  I don't like most CMSes because they require so many different technologies (i.e. moving parts) to get into production.  They all have to be fronted by some web server, a caching server, etc., etc. if they hope to achieve any kind of scale at all.

Haskell, in contrast, has Warp: a fast web-serving foundation that can break the dreaded 10k/s barrier.  Hopefully I can successfully leverage it to create something nice and unique.  If it doesn't work out, I'm sure I'll at least learn a lot.

My next series of posts will be documenting building the system so I suppose they can function as a kind of practical newbie guide [3].  You can also follow my progress (or lack thereof) via my repository.

[1] This might seem like an odd sentiment given that my day job is C# coding, but the C# type system isn't as strict as Haskell's.

[2] Well, a watchpoint is pretty quick.  But seeing the offender in the stack trace is even easier.

[3] or maybe anti-guide!  If you want to learn Haskell, use a proper guide like "Learn You a Haskell" or "Real World Haskell".

Tuesday, March 12, 2013

Dynamic scope pattern

[Noticed this was unpublished so I'll just publish it now]

Recently I noticed some confusion about dynamic scoping on Reddit. Many of the comments were even bashing dynamic scoping as something one would never need. The fact of the matter is that you probably use dynamic scoping already; you just do it by hand.

What are global variables?

To understand what dynamic scoping is and why you are using it now, we need to understand global variables. So what is a global variable? A global variable is an implicit parameter to one or more functions [1]. Rather than pass some variable to 20 different functions, most of which simply pass it on to the functions they call, you make it a global variable.

From this definition we can see that OO programming also has "global" variables. They are just restricted to specific classes (class/static variables) or instances of a class (instance variables).

Global variables, while convenient, are a double-edged sword. The problem is that any of these functions could change the variable without leaving a trace. That is, given the following:

int important_variable = 100;
void first(); void second(); void third(); // each of these may change the global
int main() { first(); second(); third(); }

If the code crashes somewhere in second() or third() due to important_variable being changed, it is difficult to tell what changed it. If some function called by first() changed the variable, then by the time one of the later functions crashes, the offender won't be on the stack anymore.

Dynamic scoping - global variables done right
One thing that would help with the above problem is if we could say that important_variable has a fixed value and can only be rebound within a specific context. That way, if our code crashes because important_variable was changed, we can look at the backtrace: the offending function will have to be in it. In C++ we might do something like this:

template <typename Value>
class DynamicSet {
    Value* _var;
    Value _oldVal;
public:
    DynamicSet(Value* var, Value newValue) : _var(var), _oldVal(*var) {
        *_var = newValue;
    }
    ~DynamicSet() { *_var = _oldVal; }
};

void first() {
    DynamicSet<int> rebind(&important_variable, 9); // must be named, or it is destroyed immediately
    // ... everything called from here sees important_variable == 9 ...
}   // leaving the scope restores the old value

Here we take advantage of C++'s destructors to make sure the variable is put back as it was, no matter how the current stack frame is exited. This is how dynamic scoping works.

Relationship between exception handling and dynamic variables

The Reddit thread also mentioned exception handling. The interesting thing here is that exception handling can be implemented in terms of dynamically scoped variables (some languages do exactly that), and dynamic variables can be implemented with exceptions if you have restartable exceptions (I've seen a dynamic-variable implementation in Squeak that worked like this).

[1] Well, it could be zero, but that wouldn't make much sense. :)

Wednesday, December 5, 2007

Defining new language constructs

I was recently involved in a conversation about the flexibility of different languages. Once a person has used a language like Smalltalk or Lisp, one measure they will probably use will be how hard it is to add new constructs to the language that behave as "built in" ones.

In Smalltalk this is easy because all constructs are written within the language using blocks (which serve the purpose of "closures"), so any methods you define can easily behave as standard constructs that come with the language. It also helps that you can add your methods to any class, including existing ones like True where certain operations make more sense.

In other languages this takes more thought because the constructs don't use explicitly delayed execution the way Smalltalk does. For example, the Lisp code:

> (if (some-function) (true-case) (false-case))

may look like a regular function call but the last two forms don't get evaluated until the first one completes, at which point only one of them will be.

Now lets imagine in one of our projects we are seeing a great deal of code that looks like:

> (if (and (case-one) (case-two))
> (true-case)
> (false-case))

but perhaps it would be clearer to make a new construct called 'ifboth. E.g.

> (ifboth ((case-one) (case-two)) (true-case) (false-case))

But how can we do this? We can't use a function because that would evaluate all these forms before calling our new construct. Luckily Lisp comes to the rescue with macros. A macro receives the forms unevaluated as lists, and is free to do what it needs with them. It simply needs to return a list that will be the expanded code. In our case the macro is pretty simple:

> (defmacro ifboth ((case-one case-two) true-case false-case)
> `(if (and ,case-one ,case-two)
> ,true-case
> ,false-case))

I won't go into the details of how this works, as there are plenty of resources on the subject. The above simply takes code like:

> (ifboth ((case-one) (case-two)) (true-case) (false-case))

And turns it into code like:

> (if (and (case-one) (case-two))
> (true-case)
> (false-case))

at compile time.

But this got me thinking about Haskell. Can Haskell do this? It doesn't have macros, after all. 1

It turns out Haskell can. Lisp has to use macros because it needs a way of controlling the evaluation of the forms. 2 In Haskell forms are always delayed until actually used. So how can we make ifboth for Haskell?

> ifboth :: Bool -> Bool -> a -> a -> a
> ifboth True True trueCase _ = trueCase
> ifboth _ _ _ falseCase = falseCase

And test it in ghci with:

> ifboth True True (print "hi") (print "bye") {- prints "hi" -}
> ifboth True False (print "hi") (print "bye") {- prints "bye" -}

If one spent a while programming in Haskell, with its lazily evaluated, "only evaluate if the value is actually used" nature, one might forget that defining new language constructs actually requires extra thought in some languages, and isn't even possible in others.

1. Actually they do have several different implementations of "macro"-ish systems, but they are not needed for things like delayed evaluation as the language simply works this way.

2. Of course Lisp macros have many uses beyond this, but the delayed evaluation is handy in this case.

Monday, May 14, 2007

Keeping focus

One of the things I love about the Smalltalk system is how it lets me keep focus on what I'm doing. Working on my friend's website, there are lots of pieces of data that I make pages to display, as any programmer can relate. And, as any programmer can also relate, I have the issue of how to populate this "model data" to test whether my display pages are working.

So I have two choices: (1) I can create the whole GUI that lets one enter data or (2) I can put some extra code in my program to populate the data, for testing.

The problem with 2 is not big: I'm just wasting time writing stuff I will have to take out later, purely so I can test. Choice 1 doesn't have that problem, but it does break my focus. I have to stop working on what I am really interested in (the display pages) to work on data-input pages.

But Smalltalk has another option (my favorite in fact): I can simply navigate through the live web site objects (via 'find instance of') until I find pages and insert the model data directly.

This way I can keep working on what I am focused on right now. In traditional languages you must choose one of the above two options, but in Smalltalk option 2 has generally been done for you in the tools.

Saturday, May 12, 2007


Here is a post I made recently at

Basically my summary of the state of concurrent processing today (reproduced below).

Afaik there are 3 ways of handling true concurrent execution (i.e. not green threads):

1) Fine-grained locking/shared thread state:

The old way of running heavy weight threads, sharing memory across threads
and using some kind of locking to protect against race conditions.

Positive: Hrm, well, I guess there is the most support for this, since it is
probably the most common. If you don't use any locking and only read the
shared data, this is a very fast approach.

Negative: It simply doesn't scale well. It also doesn't compose well. You
can't simply put two independently created pieces of code together that use
locking and expect them to work. Stated another way, fine-grained locking is
the manual memory management of concurrency methodologies [1]. If any part
of your code is doing fine-grained locking, you can never "just use it"
somewhere else. You have to dig deep down into every method to make sure you
aren't going to cause a deadlock.

This one would probably be very hard to add to Squeak based on what John

2) Shared state, transactional memory:

Think relational database. You stick a section of code in an "atomic" block
and the system does what it has to do to make the memory changes appear to
have occurred atomically.

Positive: This approach affords some composability. You still should know
whether the methods you're calling are going to operate on memory, but in the
case that you put together two pieces of code that you know will, you can just
slap an atomic block around them and it works. The system can also ensure
that nested atomic blocks work as expected, to further aid composability.
This approach can often require very few changes to existing code to make it
thread safe. And finally, you can still have all (most?) of the benefits of
thread-shared memory without having to give up so much abstraction (i.e.
work at such a low level).

Negative: To continue the above analogy, I consider this one the "reference-
counted memory management" of the options. That is, it works as expected, but
can end up taking more resources and time in the end. My concern with this
approach is that it still needs some knowledge of what the code you are
calling does at a lower level. And most people aren't going to want to
worry about it, so they will just stick "atomic" everywhere. That probably
won't hurt anything, but it forces the system to keep track of a lot more
things than it should, and this bookkeeping is not free.

This one would also require some (probably very big) VM changes to support
and could be tough to get right.

3) Share nothing message passing:

Basically, no threads, only independent processes that send messages to
each other to get work done.

Positive: This approach also allows a high level of composability. If you
get new requirements, you typically add new processes to deal with them.
And, at last, you don't have to think about what the other "guy" is going to
do. A system designed in this manner is very scalable; in Erlang, for
example, a message send doesn't have to worry whether it is sending to a local
process or a totally different computer. A message send is a message send.
There is no locking at all in this system, so no process sleeps waiting
for some other process to finish with a resource it wants (low-level
concerns). Instead, a process blocks waiting for another process to give
it work (high-level concerns).

Negative: This requires a new way of architecting in the places that use
it. What we are used to is: call a function and wait for an answer. An
approach like this works best if your message senders never care about
answers. The "main loop" sends out work, the consumers consume it and
generate output that they send to other consumers (i.e. not the main loop).
In some cases, what we would normally do in a method is done in a whole
other process. Code that uses this in Smalltalk will also have to take
care, as we *do* have state that could leak to local processes. We would
either have to make a big change to how #fork and co. work today to ensure no
information can be shared, or we would have to take care in our coding that
we don't make changes to data that might be shared.

I think this one would be, by far, the easiest to add to Squeak (unless we
have to change #fork and co., of course). I think the same code that writes
out objects to a file could be used to serialize them over the network. The
system/package/whatever can check the recipient of a message send to decide
whether it is a local call that doesn't need to be serialized.

[1] The big win usually cited for GCs is something to the effect of "well,
people forget to clean up after themselves and this frees up their time by
not making them". But really, the big win was composability. In any
GC-less system, it is always a nightmare of who has responsibility for
deleting what, and when. You can't just use a new vendor API; you have to
know whether it cleans up after itself, whether you have to do it, or whether
there is some API you call. With a GC you just forget about it, use the API,
and everything works.

Friday, February 2, 2007

A case of case

In all the documentation I have seen on Smalltalk, everyone always points out how to use #ifTrue:ifFalse: and why it works this way. Ok, got it. But at some point, one needs more branching control than just if/else.

Now I know: any time you branch you could accomplish the same thing using method overriding [1], but it isn't always practical to break out a new class for a one-time thing. This is the reason there is an #ifTrue:ifFalse: in the first place, no doubt.

But what we can learn from #ifTrue:ifFalse: is: if there is something the language doesn't provide that you need, just put it in. :) I was writing a complicated function some time back where I ran into just this case. I was having to nest if statements, but the problem was, each condition was complex. So several of the conditions had to be repeated in different if statements. Obvious code smell. "If only Smalltalk had case!" I thought. Wait a minute, why don't I just write one? I didn't handle the general case; I just made a method that handled my specific case.

onFirstComplexCase: aFirstBlock orSecond: aSecondBlock else: aLastBlock

Of course, using this method instantly turned a 20-line [2], complicated and thus unclear method into a short, very readable one. And then today, while tracking something else in the Object class, I found that someone else had thought of this first and done the more general solution. :)

caseOf: anAssociationCollection otherwise: aFailBlock

	anAssociationCollection associationsDo: [:assoc |
		assoc key value = self ifTrue: [ ^ assoc value value ] ].
	^ aFailBlock value

Very nice. This lets us do something like:

event caseOf: {
	[ #clicked ] -> [ self handleClick ].
	[ #resized ] -> [ self handleResize ].
	" ... "
} otherwise: [ self error: 'unknown event: ', event asString ]
Very cool. It looks like an ML-style case-of statement. But this still doesn't handle my case, does it? My cases are much more complicated than just comparing a class to something. You have, no doubt, figured this out, but just in case: how does the case method work again? It evaluates the key and compares the result to the receiving object. So if we have a more complicated case we could just go:

true caseOf: {
	[ self hasThisTrait and: [ Date today month = 12 ] ] -> [ self goCrazy ].
	" ... "
} otherwise: [ self error: 'logic error' ]

[1] An if statement basically creates a branch in execution. This can also be done with a class hierarchy and method overriding (and in fact this is how the boolean hierarchy works in Smalltalk).

[2] In Smalltalk, if your method gets over 5-10 lines, it is time to start looking at some refactoring. I didn't completely believe them initially when they told me, but so far it has been the case nearly every time.

Watching the watchmen

Talk of monitoring tools today got me to thinking.

Back in my C/C++ days, efficiency was my main concern. I wanted things to go as fast as possible, so I couldn't be spending instructions on things like logging or monitoring my own performance. And my code isn't going to crash anyway, so what's the point, right? Wrong. Of course my system was perfect :), but it talked to other systems that weren't. This meant my programs just didn't seem to do anything. We were completely flying blind, since there was no indication of any sort of what the system was doing. That was always my manager's main push to the team (to all of us, not just me): make the systems better at monitoring themselves.

Fast forward a few years and I am in a meeting, hearing about monitoring tools that will be able to tell us the things we need to know this time. This got me thinking about the nature of this beast. When you write a monitoring tool, you are putting in extra code to expose a part of your system. A different interface, or view :), than the normal one. You have to first guess what might be a problem area so that you can write the interfaces to show it. But, as was implicitly mentioned in this meeting, you are often wrong in your guess. Or at least you don't expose enough.

Our systems use the popular log4* libraries to handle logging via configuration. What most programmers do with this is put a log message at the start and end of every (!!!) function. So if you turn on debugging you will literally see every single call the system makes. Is there no better way to do this?

I think there is. In Smalltalk, a live system, I can view any part of the code any time I want. This is a good thing, but it was the first thing I saw as a negative when I started using Smalltalk: so inefficient, with so much of the system exposed! But is it? Smalltalk has reflection, a way for us to ask the system things about itself. It's not some hack, and it's not something you can "turn off" for production. It is part of the system from top to bottom. Smalltalk also has viewers that are designed to use this reflection to show us parts of the system. While it's running.

So if you think about it, they did the same thing my manager ask me to do years ago, and the same thing the developers at my company are trying to do now: they are giving ways to monitor the system. But the difference is, instead of guessing which parts of the system might have problems that we need to look at, they just gave us the whole thing.

As far as efficiency goes, sure, raw C++ (and even Java) may be more performant than raw Smalltalk. But if in the end you wind up doing things like calling two logging functions per normal function, every time, then how much more efficient is your code? After all, Smalltalk isn't calling into the viewers for every function, only when you use them. And you never have to go back and add new monitoring tools because you guessed wrong.