Saturday, May 12, 2007


Here is a post I made recently at

Basically my summary of the state of concurrent processing today (reproduced below).

AFAIK there are 3 ways of handling true concurrent execution (i.e. not green threads):

1) Fine-grained locking/shared thread state:

The old way of running heavyweight threads, sharing memory across threads
and using some kind of locking to protect against race conditions.

Positive: Hrm, well I guess there is the most support for this, since it is
probably the most common. If you don't use any locking and only ever read
the shared data, this is a very fast approach.

Negative: It simply doesn't scale well. It also doesn't compose well. You
can't simply put two independently created pieces of code together that use
locking and expect it to work. Stated another way, fine-grained locking is
the manual memory management of concurrency methodologies [1]. If any part
of your code is doing fine-grained locking, you can never "just use it"
somewhere else. You have to dig deep down into every method to make sure you
aren't going to cause a deadlock.
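
The lock-per-shared-structure idea can be sketched in Python (not Smalltalk; the class and names here are my own invention). Without the lock, the concurrent read-modify-write of `count` is exactly the kind of race this approach exists to prevent:

```python
import threading

class Counter:
    """Shared mutable state guarded by a fine-grained lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.count = 0

    def increment(self):
        # The lock makes the read-modify-write of `count` atomic.
        with self._lock:
            self.count += 1

counter = Counter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.count)  # 40000
```

And this tiny example already shows the composability trap: the lock discipline lives inside `Counter`, so any caller that needs to update two counters consistently has to know about, and order, both locks itself.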

This one would probably be very hard to add to Squeak based on what John

2) Shared state, transactional memory:

Think relational database. You stick a series of statements in an "atomic"
block and the system does whatever it has to do to make it appear as if the
memory changes occurred atomically.

Positive: This approach affords some composability. You should still know
whether the methods you're calling are going to operate on shared memory, but
when you put two pieces of code together that you know will, you can just
slap an atomic block around them and it works. The system can also ensure
that nested atomic blocks work as expected to further aid composability.
This approach can often require very few changes to existing code to make it
thread safe. And finally, you can still have all (most?) of the benefits of
thread-shared memory without having to give up so much abstraction (i.e.
work at such a low level).

Negative: To continue the above analogy, I consider this one the "reference
counted memory management" of options. That is, it works as expected, but
can end up taking more resources and time in the end. My concern with this
approach is that it still requires some knowledge of what the code you are
calling does at a lower level. And most people aren't going to want to
worry about that, so they will just stick "atomic" everywhere. That probably
won't hurt anything, but it forces the system to keep track of a lot more
things than it should, and this bookkeeping is not free.

This one would also require some (probably very big) VM changes to support
and could be tough to get right.
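
To make the "atomic block" idea concrete, here is a toy optimistic-concurrency sketch in Python: transactions record the versions of the cells they read, and a commit only succeeds if none of those versions changed underneath it, otherwise the block retries. Real STM implementations (Haskell's, for instance) are far more sophisticated; `TVar`, `atomic`, and everything else here are invented names for illustration only:

```python
import threading

class TVar:
    """A versioned cell; transactions validate against the version."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serializes only the validate-and-commit step

def atomic(action):
    """Run `action(read, write)` until it commits without conflicts."""
    while True:
        reads = {}   # TVar -> version observed
        writes = {}  # TVar -> pending new value
        def read(tv):
            if tv in writes:          # read-your-own-writes
                return writes[tv]
            reads[tv] = tv.version
            return tv.value
        def write(tv, value):
            writes[tv] = value
        result = action(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in reads.items()):
                for tv, value in writes.items():
                    tv.value = value
                    tv.version += 1
                return result
        # A concurrent commit invalidated our reads; retry the whole block.

# A transfer between two accounts appears all-or-nothing to other transactions.
a, b = TVar(100), TVar(0)
def transfer(read, write):
    write(a, read(a) - 30)
    write(b, read(b) + 30)
atomic(transfer)
print(a.value, b.value)  # 70 30
```

Note the bookkeeping the runtime has to do per atomic block (read sets, write sets, validation), which is exactly the cost of slapping "atomic" everywhere indiscriminately.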

3) Share nothing message passing:

Basically, no threads, only independent processes that send messages between
each other to get work done.

Positive: This approach also allows a high level of composability. If you
get new requirements, you typically add new processes to deal with them.
And at last, you don't have to think about what the other "guy" is going to
do. A system designed in this manner is very scalable; in Erlang, for
example, a message send doesn't have to worry about whether it is sending to a local
process or a totally different computer. A message send is a message send.
There is no locking at all in this system, so no process sleeps waiting
for some other process to finish with a resource it wants (a low-level
concern). Instead a process blocks waiting for another process to give
it work (a high-level concern).

Negative: This requires a new way of architecting in the places that use
it. What we are used to is: call a function and wait for an answer. An
approach like this works best if your message senders never care about
answers. The "main loop" sends out work, the consumers consume it and
generate output that they send to other consumers (i.e. not the main loop).
In some cases, what we would normally do in a method is done in a whole
other process. Code that uses this in Smalltalk will also have to take
care, as we *do* have state that could leak to local processes. We would
either have to make a big change to how #fork and co. work today to ensure no
information can be shared, or we would have to take care in our coding that
we don't make changes to data that might be shared.

I think this one would be, by far, the easiest to add to Squeak (unless we
have to change #fork and co, of course). I think the same code that writes
out objects to a file could be used to serialize them over the network. The
system/package/whatever can check the recipient of a message send to decide
whether it is a local call that does not need serialization.
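
The "main loop sends work, consumers consume it" shape can be sketched in Python in the actor style: a worker that owns no shared state and communicates only through message queues. (Python threads do share memory, so the "share nothing" part here is discipline rather than a guarantee as in Erlang; `squarer`, `run_pipeline`, and the sentinel convention are all my own invention.)

```python
import queue
import threading

def squarer(inbox, outbox):
    """A worker process: consume messages until a None sentinel arrives."""
    while True:
        msg = inbox.get()
        if msg is None:
            outbox.put(None)   # pass the sentinel downstream
            return
        outbox.put(msg * msg)  # output goes to the next consumer, not back

def run_pipeline(items):
    inbox, outbox = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=squarer, args=(inbox, outbox))
    worker.start()
    for n in items:
        inbox.put(n)           # a message send is just a message send
    inbox.put(None)
    results = []
    while (r := outbox.get()) is not None:
        results.append(r)
    worker.join()
    return results

print(run_pipeline([1, 2, 3]))  # [1, 4, 9]
```

Because the worker touches nothing but its messages, swapping `queue.Queue` for a pipe to another OS process, or a socket to another machine, changes the transport but not the design.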

[1] The big win usually cited for GCs is something to the effect of "well,
people forget to clean up after themselves and this frees up their time by
not making them". But really, the big win was composability. In any
GC-less system, it is always a nightmare of who has responsibility for
deleting what, when. You can't just use a new vendor API; you have to know
whether it cleans up after itself, whether you have to do it yourself, or
whether there is some cleanup API to call. With a GC you just forget about
it, use the API, and everything works.

