Wednesday, December 5, 2007

Defining new language constructs

I was recently involved in a conversation about the flexibility of different languages. Once a person has used a language like Smalltalk or Lisp, one measure they will probably apply is how hard it is to add new constructs to the language that behave like "built in" ones.

In Smalltalk this is easy because all constructs are written within the language using blocks (which serve the purpose of "closures"), so any methods you define can easily behave as standard constructs that come with the language. It also helps that you can add your methods to any class, including existing ones like True where certain operations make more sense.

In other languages this takes more thought, because their constructs don't use explicitly delayed execution the way Smalltalk's blocks do. For example, the Lisp code:


> (if (some-function) (true-case) (false-case))


may look like a regular function call, but the last two forms don't get evaluated until the first one completes, and at that point only one of them will be.

Now let's imagine that in one of our projects we are seeing a great deal of code that looks like:


> (if (and (case-one) (case-two))
>     (true-case)
>     (false-case))


but perhaps it would be clearer to make a new construct called ifboth, e.g.


> (ifboth ((case-one) (case-two)) (true-case) (false-case))


But how can we do this? We can't use a function because that would evaluate all these forms before calling our new construct. Luckily Lisp comes to the rescue with macros. A macro receives the forms unevaluated as lists, and is free to do what it needs with them. It simply needs to return a list that will be the expanded code. In our case the macro is pretty simple:


> (defmacro ifboth ((case-one case-two) true-case false-case)
>   `(if (and ,case-one ,case-two)
>        ,true-case
>        ,false-case))


I won't go into the details of how this works, as there are plenty of resources on the subject. The above simply takes code like:


> (ifboth ((case-one) (case-two)) (true-case) (false-case))


And turns it into code like:


> (if (and (case-one) (case-two))
>     (true-case)
>     (false-case))


at compile time.

But this got me thinking about Haskell. Can Haskell do this? It doesn't have macros, after all. [1]

It turns out Haskell can. Lisp has to use a macro because it needs a way of controlling the evaluation of the forms. [2] In Haskell, expressions are always delayed until their values are actually used. So how can we make ifboth for Haskell?


> ifboth :: Bool -> Bool -> a -> a -> a
> ifboth True True trueCase _ = trueCase
> ifboth _    _    _ falseCase = falseCase


And test it in ghci with:


> ifboth True True (print "hi") (print "bye") {- prints "hi" -}
> ifboth True False (print "hi") (print "bye") {- prints "bye" -}


If one spent a while programming in Haskell, with its lazily evaluated, "only evaluate a value if it is actually used" nature, one might forget that defining new language constructs actually requires extra thought in some languages. And isn't even possible in others.

[1] Actually Haskell does have several different implementations of "macro"-ish systems, but they are not needed for things like delayed evaluation, as the language simply works this way.

[2] Of course Lisp macros have many uses beyond this, but the delayed evaluation is handy in this case.

Monday, May 14, 2007

Keeping focus

One of the things I love about the Smalltalk system is how it lets me keep focus on what I'm doing. Working on my friend's website, there are lots of pieces of data that I make pages to display, as any programmer can relate. And, also as any programmer can relate, I have the issue of how to populate this "model data" to test whether my display pages are working.

So I have two choices: (1) I can create the whole GUI that lets one enter data or (2) I can put some extra code in my program to populate the data, for testing.

The problem with choice 2 is not big: I'm just wasting time writing stuff I will have to take out later, purely so I can test. Choice 1 doesn't have that problem, but it does break my focus. I have to stop working on what I am really interested in, the display pages, to go work on data input pages.

But Smalltalk has another option (my favorite in fact): I can simply navigate through the live web site objects (via 'find instance of') until I find the pages and insert the model data directly.

This way I can keep working on what I am focused on right now. In traditional languages you must choose one of the two options above, but in Smalltalk the tools have essentially already done option 2 for you.

Saturday, May 12, 2007

Threading

Here is a post I made recently at http://lists.squeakfoundation.org/pipermail/squeak-dev/2007-February/114181.html.

Basically my summary of the state of concurrent processing today (reproduced below).

Afaik there are 3 ways of handling true concurrent execution (i.e. not green
threads):

1) Fine-grained locking/shared thread state:

The old way of running heavy weight threads, sharing memory across threads
and using some kind of locking to protect against race conditions.

Positive: Hrm, well I guess there is the most support for this, since it is
probably the most common approach. If you don't use any locking and only
read the shared data, this is a very fast approach.

Negative: It simply doesn't scale well. It also doesn't compose well. You
can't simply put two independently created pieces of code together that use
locking and expect them to work. Stated another way, fine-grained locking is
the manual memory management of concurrency methodologies [1]. If any part
of your code is doing fine-grained locking, you can never "just use it"
somewhere else. You have to dig deep down in every method to make sure you
aren't going to cause a deadlock.

This one would probably be very hard to add to Squeak based on what John
said.
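To make the composition problem concrete, here is a minimal sketch of fine-grained locking, written in Haskell with MVars simply because that is compact to show; the Account type and transfer function are made up for illustration. The hidden contract, that every caller must take the locks in the same order or risk a deadlock, is exactly the "dig deep down in every method" problem described above.

> import Control.Concurrent.MVar (MVar, takeMVar, putMVar)
>
> -- A made-up example: each balance sits behind its own lock.
> type Account = MVar Int
>
> -- Move money by taking both locks, updating, then releasing them.
> -- Nothing in the types says in what order the locks must be taken;
> -- two concurrent transfers in opposite directions can deadlock.
> transfer :: Account -> Account -> Int -> IO ()
> transfer from to amount = do
>   a <- takeMVar from
>   b <- takeMVar to
>   putMVar from (a - amount)
>   putMVar to   (b + amount)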

2) Shared state, transactional memory:

Think relational databases. You stick a sequence of code in an "atomic"
block and the system does whatever it has to so that the memory changes
appear to have occurred atomically.

Positive: This approach affords some composability. You still should know
whether the methods you're calling are going to operate on shared memory,
but in the case that you put two pieces of code together that you know will,
you can just slap an atomic block around them and it works. The system can
also ensure that nested atomic blocks work as expected, to further aid
composability. This approach often requires very few changes to existing
code to make it thread safe. And finally, you can still have all (most?) of
the benefits of thread-shared memory without having to give up so much
abstraction (i.e. work at such a low level).

Negative: To continue the above analogy, I consider this one the
"reference-counted memory management" of the options. That is, it works as
expected, but can end up taking more resources and time in the end. My
concern with this approach is that it still needs some knowledge of what the
code you are calling does at a lower level. And most people aren't going to
want to worry about that, so they will just stick "atomic" everywhere. That
probably won't hurt anything, but it forces the system to keep track of a
lot more things than it should, and this bookkeeping is not free.

This one would also require some (probably very big) VM changes to support
and could be tough to get right.
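For a feel of what the atomic-block style looks like in practice, here is a minimal sketch using GHC Haskell's stm library; the Account type and transfer function are again made up. Two reads and two writes are composed into one atomic action just by wrapping them in the same atomically block.

> import Control.Concurrent.STM (TVar, atomically, readTVar, writeTVar)
>
> -- A made-up example, mirroring the locking sketch above.
> type Account = TVar Int
>
> -- The runtime makes the whole block appear atomic, retrying the
> -- transaction if another thread changed these TVars underneath us.
> transfer :: Account -> Account -> Int -> IO ()
> transfer from to amount = atomically $ do
>   a <- readTVar from
>   b <- readTVar to
>   writeTVar from (a - amount)
>   writeTVar to   (b + amount)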

3) Share nothing message passing:

Basically, no threads, only independent processes that send messages to
each other to get work done.

Positive: This approach also allows a high level of composability. If you
get new requirements, you typically add new processes to deal with them.
And at last, you don't have to think about what the other "guy" is going to
do. A system designed in this manner is very scalable; in Erlang, for
example, a message send doesn't have to worry whether it is sending to a
local process or to a totally different computer. A message send is a
message send. There is no locking at all in such a system, so no process
sleeps waiting for some other process to finish with a resource it wants (a
low-level concern). Instead, a process blocks waiting for another process to
give it work (a high-level concern).

Negative: This requires a new way of architecting in the places that use
it. What we are used to is: call a function and wait for an answer. An
approach like this works best if your message senders never care about
answers. The "main loop" sends out work, and the consumers consume it and
generate output that they send to other consumers (i.e. not back to the main
loop). In some cases, what we would normally do in a method is done in a
whole other process. Code that uses this in Smalltalk will also have to take
care, as we *do* have state that could leak between local processes. We
would either have to make a big change to how #fork and co. work today to
ensure no information can be shared, or we would have to take care in our
coding that we don't make changes to data that might be shared.

I think this one would be, by far, the easiest to add to Squeak (unless we
have to change #fork and co., of course). I think the same code that writes
out objects to a file could be used to serialize them over the network. The
system/package/whatever can check the recipient of a message send to decide
whether it is a local call that doesn't need to be serialized.
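As a rough sketch of the shape of this style, here is a Haskell version using Chans rather than true isolated processes, with made-up worker and job names; Chans are still shared memory under the hood, so this only illustrates the architecture, not a real share-nothing system. The "main loop" only sends out work, and the worker pushes its results downstream.

> import Control.Concurrent (forkIO)
> import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
> import Control.Monad (forever, replicateM_)
>
> -- A made-up worker: it owns no shared state; it just pulls jobs off
> -- its inbox and sends results to the next consumer in the pipeline.
> worker :: Chan Int -> Chan String -> IO ()
> worker inbox downstream = forever $ do
>   job <- readChan inbox
>   writeChan downstream ("done: " ++ show job)
>
> main :: IO ()
> main = do
>   jobs    <- newChan
>   results <- newChan
>   _ <- forkIO (worker jobs results)
>   mapM_ (writeChan jobs) [1, 2, 3]
>   -- Here main also drains the results, only to keep the sketch
>   -- self-contained; in the architecture described above they would
>   -- flow to yet another consumer instead.
>   replicateM_ 3 (readChan results >>= putStrLn)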

[1] The big win usually cited for GCs is something to the effect of "well,
people forget to clean up after themselves, and this frees up their time by
not making them do it". But really, the big win was composability. In any
GC-less system, it is always a nightmare of who has responsibility for
deleting what, and when. You can't just use a new vendor API: you have to
know whether it cleans up after itself, whether you have to do it, or
whether there is some API you call for that. With a GC you just forget about
it, use the API, and everything works.

Friday, February 2, 2007

A case of case

In all the documentation I have seen on Smalltalk, everyone always points out how to use #ifTrue:ifFalse: and why it works this way. Ok, got it. But at some point, one needs more branching control than just if/else.

Now I know: anytime you branch you could accomplish the same thing using method overriding [1], but it isn't always practical to break out a new class for a one-time thing. This is the reason there is an #ifTrue:ifFalse: in the first place, no doubt.

But what we can learn from #ifTrue:ifFalse: is: if there is something you need that the language doesn't provide, just put it in. :) I was writing a complicated method some time back where I ran into just this case. I was having to nest if statements, but the problem was that each condition was complex, so several of the conditions had to be repeated in different if statements. Obvious code smell. "If only Smalltalk had case!" I thought. Wait a minute, why don't I just write one? I didn't handle the general case; I just made a method that handled my specific case.

onFirstComplexCase: aFirstBlock orSecond: aSecondBlock else: aLastBlock

Of course using this method instantly turned a 20-line [2], complicated, and thus unclear method into a short, very readable one. And then today, while tracking down something else in the Object class, I found that someone else had thought of this first and done the more general solution. :)

caseOf: anAssociationCollection otherwise: aFailBlock

    anAssociationCollection associationsDo: [:assoc |
        assoc key value = self ifTrue: [ ^ assoc value value ]].
    ^ aFailBlock value

Very nice. This lets us do something like:

event
    caseOf: {
        [ #clicked ] -> [ self handleClick ].
        [ #resized ] -> [ self handleResize ].
        " ... "
    }
    otherwise: [ self error: 'unknown event: ', event asString ]

Very cool. It looks like an ML-style case expression. But this still doesn't handle my case, does it? My cases are much more complicated than just comparing one value to another. You have, no doubt, figured this out, but just in case: how does the case method work again? It evaluates each key block and compares the result to the receiving object. So if we have a more complicated case we can just go:

true
    caseOf: {
        [ self hasThisTrait and: [ Date today month = 12 ] ] -> [ self goCrazy ]
        " ... "
    }
    otherwise: [ self error: 'logic error' ]
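For comparison, here is roughly how those two dispatches read with Haskell's built-in case expression and guards; this is only a sketch with made-up handler stubs, just to show the ML-style shape mentioned above.

> -- Made-up stubs standing in for the handlers above.
> handleClick, handleResize, goCrazy :: IO ()
> handleClick  = putStrLn "click"
> handleResize = putStrLn "resize"
> goCrazy      = putStrLn "crazy"
>
> -- The symbol-dispatch example, as a plain case expression.
> handleEvent :: String -> IO ()
> handleEvent event = case event of
>   "clicked" -> handleClick
>   "resized" -> handleResize
>   _         -> error ("unknown event: " ++ event)
>
> -- The "complicated conditions" example, using guards.
> react :: Bool -> Int -> IO ()
> react hasThisTrait month
>   | hasThisTrait && month == 12 = goCrazy
>   | otherwise                   = error "logic error"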

[1] An if statement basically creates a branch in execution. This can also be done with a class hierarchy and method overriding (and in fact this is how the Boolean hierarchy works in Smalltalk).

[2] In Smalltalk, if your method gets over 5-10 lines it is time to start looking at some refactoring. I didn't completely believe this when I was first told, but so far it has been the case nearly every time.

Watching the watchmen

Talk of monitoring tools today got me to thinking.

Back in my C/C++ days, efficiency was my main concern. I wanted things to go as fast as possible. So I couldn't be spending instructions on things like logging or monitoring my own performance. And my code isn't going to crash anyway, so what's the point, right? Wrong. Of course my system was perfect :), but it talked to other systems that weren't. This meant my programs just didn't seem to do anything. We were completely flying blind, since there was no indication of any sort of what they were doing. That was always the main push of my manager on the team (to all of us, not just me): make the systems better at monitoring themselves.

Fast forward a few years and I am in a meeting hearing about monitoring tools that will be able to tell us the things we need to know this time. This got me thinking about the nature of this beast. When you write such a tool, you are putting in extra code to expose a part of your system. A different interface, or view :), than the normal one. You have to first guess what might be a problem area so that you can write the interfaces to show it. But, as was implicitly mentioned in this meeting, you are often wrong in your guess about this. Or at least you don't expose enough.

Our systems use the popular *4log libraries to handle logging via configuration. What most programmers do with this is put a log message at the start and end of every (!!!) function. So if you turn on debugging you will literally see every single call the system makes. Is there no better way to do this?

I think there is. In Smalltalk, a live system, I can view any part of the code any time I want. This is a good thing, but it was the first thing I saw as a negative when I started using Smalltalk. So inefficient, with so much of the system exposed! But is it? It has reflection, a way for us to use the system to ask the system things about itself. It's not some hack, and it's not something you "turn off" for production. It is part of the system from top to bottom. It also has viewers that are designed to use this reflection to show us parts of the system. While it's running.

So if you think about it, they did the same thing my manager asked me to do years ago, and the same thing the developers at my company are trying to do now: they are giving us ways to monitor the system. But the difference is, instead of guessing which parts of the system might have problems that we need to look at, they just gave us the whole thing.

As far as efficiency goes, sure, raw C++ (and even Java) may be more performant than raw Smalltalk. But if in the end you wind up doing things like calling two logging functions per normal function, every time, then how much more efficient is your code? After all, Smalltalk isn't calling into the viewers for every function. Only if you use them. And you never have to go back and add new monitoring tools because you guessed wrong.

Thursday, February 1, 2007

The world from 60,000 feet

Well, I haven't gotten involved in this whole blog thing up until now but... well... why not.

This blog is probably going to mostly be about high level programming languages. I think the tide is slowly starting to shift toward higher level, more productive programming languages.

I know it has for me. I was a Perl programmer for more than half a decade, but I always hated that language. My favorite language at the time was C++. It had syntax I knew from C, but it was the most powerful language I had seen yet. You could change and add to the language! This is what drew me to it. Power.

About 6 months ago I got back into programming a bit and started looking around. A new friend had asked me to take over his web site. He is pretty talented at graphical stuff, but running a web site always winds up involving technical details that he didn't feel comfortable doing. And since I "do computers" he figured I wouldn't mind taking it on.

But the fact is I always hated web programming. Stateless programming, templates, etc. I just didn't like it at all. But looking around something caught my eye: a little web framework called "Seaside".

Now the last I heard, PHP was the king of web programming, and PHP was similar to Perl. Not to mention a heavy user of templates. So I wasn't interested in that at all. But this Seaside thing was talked about like it was quite the big deal, so I took a look.

They had this little counter, just a 0 with a ++ and -- under it. When you click one of those you change the number. At first I was pretty underwhelmed, but then I clicked the button for "show source code". Now I didn't know anything at all about Smalltalk, but I saw something that really grabbed my attention.

All the code did was update the object. No stuffing variables into some hidden field, no writing files, none of that. It just updates an object and nothing more. I read everything on the site, just to see how they did this, so I could steal, er, use it for whatever system I made. But the more I looked, the crazier it got.

So I downloaded Smalltalk; it wasn't that intimidating. Just download a file, click it, and it runs. No installer menu filling my registry with God knows what. Just a nice little executable like 'putty' (an SSH tool).

Gradually, my love for C++ slipped away. Replaced with something new. A language that was even more powerful, but at the same time much simpler. C++ gained some of its power by allowing you to override what a given operator means for certain classes. Since Smalltalk is written in itself, it goes so much further. All the control statements are written in the language, and are thus accessible to the coder. Or you can write your own.

The other big thing was this concept of a "live" system. A Perl system I wrote way back when had to do some operations on a very large number of network devices. Unfortunately, at close to device 59,000 something bad happened. A bug. Rather than try to guess what was happening and do some logging, I just ran the whole system in the debugger. The system normally took 6 hours to run. In the debugger, running single-threaded, it took much more than that. This meant I got one chance per day to try and figure out what the problem was. But with logging it would have been the same.

Finally, after about the 3rd or 4th day, I found it. I don't remember the details, but I do remember that I had a fix about 30 seconds after seeing what was going on. But the problem was, the devices behind this one had not been backed up for some time now, and some people were getting more than a little excited about this. I only had a couple of thousand devices to go, so why not just start from here?

Well, because I couldn't. There was nothing I would be able to do from inside the debugger to convince the program to go on from this point, and starting again meant waiting another day. Obviously it wasn't a huge deal, or I could have written a little "one off" to touch the untouched devices (and maybe I even did; I don't remember). But it was something I wished I could have done, and something like that is just impossible. Or is it?

Not with Smalltalk (or Lisp, as it turns out). In Smalltalk, I sometimes write portions of the code from the debugger, since that is the best place to get instant feedback on what the code is doing.

I will talk more about this later, perhaps. But let me just finish with this: moving to Smalltalk has also piqued my interest in other high-level languages. I am looking at Haskell (an incredible language as well, and possibly the most terse there is), and I keep turning my head toward Erlang, but I haven't bitten quite yet.