Apr 2011

Immutability and Blocks, Lambdas and Closures [UPDATE x2]

I recently ran into some “interesting” behaviour when using lambda in Python. Perhaps it’s only interesting because I learnt lambdas from a functional language background, where I expect that they work a particular way, and the rest of the world that learnt lambdas through Python, Ruby or JavaScript disagrees. (Shouts out to you Objective-C developers who are discovering the wonders of blocks, too.) Nevertheless, I thought this would be blog-worthy. Here’s some Python code that shows the behaviour that I found on Stack Overflow:
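A minimal rendering of it, in Python 3 syntax so that print can be called from a lambda:

    functions = []
    for m in ["do", "re", "mi"]:
        functions.append(lambda: print(m))  # each lambda closes over m

    for f in functions:
        f()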

Since I succumb to reading source code in blog posts by interpreting them as “blah”¹, a high-level overview of what that code does is:

  1. iterate over a list of strings,
  2. create a new list of functions that print out the strings, and then
  3. call those functions, which prints the strings.

Simple, eh? Prints “do”, then “re”, then “mi”, eh? Wrong. It prints out “mi”, then “mi”, then “mi”. Ka-what?

(I’d like to stress that this isn’t a theoretical example. I hit this problem in production code, and boy, it was lots of fun to debug. I hit the solution right away thanks to the wonders of Google and Stack Overflow, but it took me a long time to figure out that something was going wrong at that particular point in the code, and not somewhere else in my logic.)

The second answer to the Stack Overflow question is the clearest exposition of the problem, with a rather clever solution too. I won’t repeat it here since you all know how to follow links. However, while that answer explains the problem, there’s a deeper issue. The inconceivable Manuel Chakravarty provides a far more insightful answer when I emailed him to express my surprise at Python’s lambda semantics:

This is a very awkward semantics for lambdas. It is also probably almost impossible to have a reasonable semantics for lambdas in a language such as Python.

The behaviour that the person on SO, and I guess you, found surprising is that the contents of the free variables of the lambda’s body could change between the point in time where the closure for the lambda was created and when that closure was finally executed. The obvious solution is to put a copy of the value of the variable (instead of a pointer to the original variable) into the closure.

But how about a lambda where a free variable refers to a 100MB object graph? Do you want that to be deep copied by default? If not, you can get the same problem again.

So, the real issue here is the interaction between mutable storage and closures. Sometimes you want the free variables to be copied (so you get their value at closure creation time) and sometimes you don’t want them copied (so you get their value at closure execution time or simply because the value is big and you don’t want to copy it).

And, indeed, since I love being categorised as a massive Apple fanboy, I found the same surprising behaviour with Apple’s blocks semantics in C, too:
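Here’s a sketch of the kind of code I mean (a reconstruction rather than the Gist’s exact code; compile with clang -fblocks, and link against a blocks runtime on non-Apple platforms):

    #include <stdio.h>

    typedef void (^StringBlock)(void);

    int main(void) {
        const char *notes[] = { "do", "re", "mi" };
        StringBlock blocks[3];

        for (int i = 0; i < 3; i++) {
            const char *note = notes[i];  // captured by the block as a const copy
            // The block literal lives on the stack of this loop iteration;
            // squirrelling the pointer away for later use is the bug
            // dissected in Update #2 below.
            blocks[i] = ^{ printf("%s\n", note); };
        }

        for (int i = 0; i < 3; i++)
            blocks[i]();

        return 0;
    }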

See the Gist page for this sample code to find out how to work around the problem in Objective-C (basically: copy the block), and what it’d look like in Haskell (with the correct behaviour).

In Python, the Stack Overflow solution that I linked to has an extremely concise way of giving the programmer the option to either copy the value or simply maintain a reference to it, and the syntax is clear enough—once you understand what on Earth the problem is, that is. I don’t understand Ruby or JavaScript well enough to comment on how they’d capture the immediate value for lambdas or whether they considered this design problem. C++0x will, unsurprisingly, give programmers full control over lambda behaviour that will no doubt confuse the hell out of people. (See the C++0x language draft, section 5.1.2 on page 91.)
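That concise way is, if memory serves, the default-argument idiom: default values are evaluated once, at definition time, so they snapshot the current value of the loop variable.

    functions = []
    for m in ["do", "re", "mi"]:
        # m=m evaluates m right now, giving each lambda its own copy
        # instead of a reference to the shared loop variable.
        functions.append(lambda m=m: print(m))

    for f in functions:
        f()  # prints "do", "re", "mi", as originally intended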

In his usual incredibly didactic manner, Manuel then went on to explain something else insightful:

I believe there is a deeper issue here. Copying features of FP languages is the hip thing in language design these days. That’s fine, but many of the high-powered FP language features derive their convenience from being unspecific, or at least unconventional, about the execution time of a piece of code. Lambdas delay code execution, laziness provides demand-dependent code execution plus memoisation, continuations capture closures including their environment (ie, the stack), etc. Another instance of that problem was highlighted by Joe Duffy in his STM retrospective.

I would say, mutability and flexible control flow are fundamentally at odds in language design.

Indeed, I’ve been doing some language exploration again lately as the lack of static typing in Python is really beginning to bug me, and almost all the modern languages that attempt to pull functional programming concepts into object-oriented land seem like a complete Frankenstein, partially due to mutability. Language designers, please, this is 2011: multicore computing is the norm now, whether we like it or not. If you’re going to make an imperative language—and that includes all your OO languages—I’ll paraphrase Tim Sweeney: in a concurrent world, mutable is the wrong default! I’d love a C++ or Objective-C where all variables are const by default.

One take-away point from all this is to try to keep your language semantics simple. I love Dan Ingalls’ quote from Design Principles Behind Smalltalk: “if a system is to serve the creative spirit, it must be entirely comprehensible to a single individual”. I love Objective-C partially because its message-passing semantics are straightforward, and its runtime has an amazingly compact API and implementation considering how powerful it is. I’ve been using Python for a while now, and I still don’t really know the truly nitty-gritty details about subtle object behaviours (e.g. class variables, multiple inheritance). And I mostly agree with Guido’s assertion that Python should not have included lambda or reduce, given what Python’s goals are. After discovering this quirk about them, I’m still using the lambda in production code because the code savings do justify the complexity, but you bet your ass there’s a big comment there saying “warning, pretentious code trickery be here!”

1. See point 13 of Knuth et al.’s Mathematical Writing report.

UPDATE: There’s a lot more subtlety at play here than I first realised, and a couple of statements I’ve made above are incorrect. Please see the comments if you want to really figure out what’s going on: I’d summarise the issues, but the interaction between various language semantics is extremely subtle and I fear I’d simply get it wrong again. Thank you to all the commenters for both correcting me and adding a lot of value to this post. (I like this Internet thing! Other people do my work for me!)

Update #2

I’ve been overwhelmed by the comments, in both the workload sense and in the pleasantly-surprised-that-this-provoked-some-discussion sense. Boy, did I get skooled in a couple of areas. I’ve had a couple of requests to try to summarise the issues here, so I’ll do my best to do so.

Retrospective: Python

It’s clear that my misunderstanding of Python’s scoping/namespace rules is the underlying cause of the problem: in Python, for/while/if statements don’t create a new scope, so variables bound inside them simply live in the enclosing scope (the surrounding function, class or module). So in my example above, using a lambda inside the for loop creates a closure that references the variable m, but m’s value has changed by the end of the for loop to “mi”, which is why it prints “mi, mi, mi”. I’d prefer to link to the official Python documentation about this here rather than writing my own words that may be incorrect, but I can’t actually find anything in the official documentation that authoritatively defines this. I can find a lot of blog posts warning about it—just Google for “Python for while if scoping” to see a few—and I’ve perused the entire chapter on Python’s compound statements, but I just can’t find it. Please let me know in the comments if you do find a link, in which case I’ll retract half this paragraph and stand corrected, and also a little shorter.
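In the meantime, the rule takes ten seconds to see at the interactive prompt:

    >>> for m in ["do", "re", "mi"]:
    ...     pass
    ...
    >>> m  # the loop variable outlives the loop: no new scope was created
    'mi'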

I stand by my assertion that Python’s for/while/if scoping is slightly surprising, and for some particular scenarios—like this—it can cause some problems that are very difficult to debug. You may call me a dumbass for bringing assumptions about one language to another, and I will accept my dumbassery award. I will happily admit that this semantics has advantages, such as being able to access the last value assigned in a for loop, or not requiring definitions of variables before executing an if statement that assigns to them and then using them later in the same scope. All language design decisions have advantages and disadvantages, and I respect Python’s choice here. However, I’ve been using Python for a few years, consider myself to be at least a somewhat competent programmer, and only just found out about this behaviour. I’m surprised 90% of my code actually works as intended given these semantics. In my defence, this behaviour was not mentioned at all in the excellent Python tutorials, and, as mentioned above, I can’t find a reference for it in the official Python documentation. I’d expect that this behaviour is enough of a difference vs other languages to at least be mentioned. You may disagree with me and regard this as a minor issue that only shows up when you do crazy foo like use lambda inside a for loop, in which case I’ll shrug my shoulders and go drink another beer.

I’d be interested to see if anyone can come up with an equivalent for the “Closures and lexical closures” example at http://c2.com/cgi/wiki?ScopeAndClosures, given another Python scoping rule: assignment to a variable automatically makes it a local variable. (Thus the necessity for Python’s global keyword.) I’m guessing that you can create the createAdder closure example there with Python’s lambdas, but my brain is pretty bugged out today, so the sketch below comes with no warranty. You can simply write a callable class to do that and instantiate an object, of course, which I do think is about 1000x clearer. There’s no point using closures when the culture understands objects a ton better, and the resulting code is more maintainable.
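For what it’s worth, here’s a sketch of both approaches (create_adder mirrors what I remember of the c2 example; treat it as unverified):

    def create_adder(n):
        # n is a free variable of the lambda, but each call to create_adder
        # produces a fresh binding of n, so each adder keeps its own value.
        return lambda x: x + n

    class Adder:
        # The callable-class equivalent: more verbose, far more obvious.
        def __init__(self, n):
            self.n = n

        def __call__(self, x):
            return x + self.n

    add_three = create_adder(3)
    print(add_three(10))  # 13
    print(Adder(3)(10))   # 13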

Python summary: understand how scoping in for/while/if blocks works, otherwise you’ll run into problems that can cost you hours, and get skooled publicly on the Internet for all your comrades to laugh at. Even with all the language design decisions that I consider weird, I still respect and like Python, and I feel that Guido’s answer to the stuff I was attempting would be “don’t do that”. Writing a callable class in Python is far less tricky than using closures, because a billion times more people understand the semantics of classes. It’s always a design question of whether the extra trickiness is more maintainable or not.

Retrospective: Blocks in C

My C code with blocks failed for a completely different reason unrelated to the Python version, and this was a total beginner’s error with blocks, for which I’m slightly embarrassed. The block was being stack-allocated, so upon exit of the for loop that assigns the function list, the pointers to the blocks are effectively invalid. I was a little unlucky that the program didn’t crash. The correct solution is to perform a Block_copy, in which case things work as expected.
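Concretely, amending the earlier sketch (and adding #include <Block.h> for Block_copy and Block_release):

    for (int i = 0; i < 3; i++) {
        const char *note = notes[i];
        // Block_copy moves the block from the stack to the heap, so the
        // pointer stays valid after this iteration's scope ends.
        blocks[i] = Block_copy(^{ printf("%s\n", note); });
    }

    for (int i = 0; i < 3; i++) {
        blocks[i]();               // do, re, mi
        Block_release(blocks[i]);  // balance the Block_copy
    }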

Retrospective: Closures

Not all closures are the same; or, rather, closures are closures, but their semantics can differ from language to language due to many different language design decisions—such as how one chooses to define the lexical environment. Wikipedia’s article on closures has an excellent section on differences in closure semantics.

Retrospective: Mutability

I stand by all my assertions about mutability. This is where the Haskell tribe will nod their collective heads, and all the anti-Haskell tribes think I’m an idiot. Look, I use a lot of languages, and I love and hate many things about each of them, Haskell included. I fought against Haskell for years and hated it until I finally realised that one of its massive benefits is that things bloody well work an unbelievable amount of the time once your code compiles. Don’t underestimate how much of a revelation this is, because that’s the point where the language’s beauty, elegance and crazy type system fade into the background and, for the first time, you see one gigantic pragmatic advantage of Haskell.

One of the things that Haskell does to achieve this is its severe restriction on mutability: things are immutable by default. Apart from the lovely checkbox reason that you can write concurrency-safe algorithms with far less fear, I truly believe that this makes for generally more maintainable code. You can read code and think once about what value a variable holds, rather than keeping it in the back of your mind all the time. The human mind is better at keeping track of multiple names than of a single name with different states.

The interaction of state and control flow is perhaps the most complex thing to reason about in programming—think concurrency, re-entrancy, disruptive control flow such as longjmp, exceptions, co-routines—and mutability complicates that by an order of magnitude. The subtle difference in behaviour between all the languages discussed in the comments shows that “well-understood” concepts such as lexical scoping, for loops and closures can produce results that many people still don’t expect; at least for this simple example, these issues would have been avoided altogether if mutability were disallowed. Of course mutability has its place. I’m just advocating that we should restrict it where possible, and at least a smattering of other languages—and hopefully everyone who has to deal with thread-safe code—agrees with me.

Closing

I’d truly like to thank everyone who added their voice and spent the time to comment on this post. It’s been highly enlightening, humbling, and has renewed my interest in discussing programming languages again after a long time away from it. And hey, I’m blogging again. (Though perhaps after this post, you may not think that two of those things are good things.) It’s always nice when you learn something new, which I wouldn’t have if not for the excellent peer review. Science: it works, bitches!
