Henry David Thoreau was a content creator active in the 1850s.
Michael Fogus is right (as usual) about this one. Someone with Odersky's stature should not fear lack of humility but instead lack of confidence.
Way back in May 2012, Ambrose was beginning work on Typed Clojure for Google Summer of Code. I interviewed him to get an idea of what kind of type system we could expect. Since he's now running a very successful crowdfunding campaign, I thought I'd bring up this blast from the past.
A Lisp with a macro system is actually two languages in a stack. The bottom language is the macro-less target language (which I'll call the Lambda language). It includes everything that can be interpreted or compiled directly.
The Macro language is a superset of the Lambda language. It has its own semantics, which is that Macro language code is recursively expanded into code of the Lambda language.
Why isn't this obvious at first glance? My take on it is that because the syntax of both languages is the same and the output of the Macro language is Lambda language code (instead of machine code), it is easy to see the Macro language as a feature of the Lisp. Macros in Lisp are stored in the dynamic environment (much like functions) and are compiled just like functions (they are themselves written in the Macro language), which makes it even easier to confuse the layers. It seems like a phase in some greater language which is the amalgam of the two.
However, it is very useful to see these as two languages in a stack. For one, realizing that macroexpansion is an interpreter (called macroexpand) means that we can apply all of our experience of programming language design to this language. What useful additions can be added? Also, it makes clear why macros typically are not first-class values in Lisps: they are not part of the Lambda language, which is the one in which values are defined.
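A quick illustration of macroexpand acting as the Macro-language interpreter (ready? and launch! are hypothetical names used only for the example):

```clojure
;; macroexpand interprets Macro-language code and emits
;; Lambda-language code. `when` is a macro, so it disappears
;; from the output, leaving only `if` and `do`.
(macroexpand '(when ready? (launch!)))
;=> (if ready? (do (launch!)))
```

The expansion contains no macros, only forms the Lambda language can evaluate directly.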
The separation of these two languages reveals another subtlety: that the macro language is at once an interpreter and a compiler. The semantics of the Macro language are defined to always output Lambda language, whereas the Lambda language is defined as an interpreter (as in McCarthy's original Lisp paper) and the compiler is an optimization. We can say that the Macro language has translation semantics.
But what if we define a stack that only allows languages whose semantics are simply translation semantics? That is, at the bottom there is a language whose semantics define what machine code it translates to. We would never need to explicitly write a compiler for that language (it would be equivalent to the interpreter). This is what I am exploring now.
When I was training as a teacher, I gave a simple quiz with True/False (T/F) questions. The results were terrible. Worse than chance. On one question, about 20% of the class got it right.
I had asked a simple question involving a logical AND:
True or False?
A parallelogram has parallel opposite sides AND it has five sides.
Eighty percent of the class chose 'True', even though all parallelograms have four sides. The other teachers told me the question was difficult because it was a T/F question. They said they never give T/F questions because they only confuse the kids. They said I should just forget about T/F and try a different type of question. But it was my class and my time to explore teaching and I knew that this question was not that hard. Several connections became clear in my mind: using the right part of their brains, making the problem about people, and using their imaginations effectively. I wanted to give it a shot.
I planned the next class around answering True/False questions. There would be an experiment to confirm my suspicion (that the kids were using the wrong part of their brains), a lesson using an imaginative process, and then a similar quiz to see how it worked.
The next morning in class, I wrote the T/F question on the blackboard and called a student up to answer it. He read it and said 'True' (the wrong answer). I asked him "what about this part?", pointing to the false part. He was clearly confused. The part about five sides was obviously false to him. He then began looking around1 through the question and stopped at the first part (the true part). He pointed at it and said 'True', as if it negated the fact that the other part was false. It's hard to describe, but I was convinced that he was simply looking for something that was true to make the whole question true. And he thought that it was the right answer. My hypothesis was confirmed: he was using a visual strategy when it was not called for.
I then demonstrated an imagination process for solving True/False questions. It went like this:
When solving a True/False question, I first imagine someone standing in front of me. He says the statement from the question to me. If he is lying, the answer is False. If he is telling the truth, the answer is True.
I asked a couple of people to carry out the process while narrating it to me. They seemed to be able to do it (and they got it right). So then I gave the quiz.
The result? Correct answers went from 20% to 80%. I felt like I was finally testing their knowledge of the material and not their understanding of test-taking strategies.
How did it work? By converting the problem from a logic skill to a social skill, the students could totally bypass the need to process difficult symbolic rules. And we could solve it as a social problem by using a structured process of imagination.
True/False questions are difficult because there are so many levels of binary confusion. First, you are looking for the correct (as opposed to the incorrect) answer. Then you must determine the truth value of the whole statement, which is a function of the truth values of the sub-statements. It's just a lot of levels to keep in your head.
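Since this is a Clojure blog, the parallelogram question can be written as a conjunction, where a single false conjunct makes the whole statement false:

```clojure
;; "has parallel opposite sides AND has five sides"
;; true AND false is false, no matter how true the first part is.
(and true false) ;=> false
```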
The imaginative process cuts through all of that and asks one question: is he lying? You are offloading the processing to the social part of your brain, which can easily do it if framed in the right way.2
In my last post, I hinted at a better way to teach how to determine whether a function is a pure function. The better way is to imagine a robot in front of you. Can he run that function "in his head"? Or does he need to affect the outside world?↩
I always hated in school when the teacher would instruct us to use our imaginations to solve problems. Well, hmm. Especially as a kid, my imagination was filled with magic. Not very useful. Not that you shouldn't use your imagination. You should. But as an instruction, I find it lacking. The teachers were just being lazy.
See, the thing I learned through a lot of experiments and reading is that our imagination is very powerful if we use it correctly. But much as our computers can run any possible program, our minds can imagine any situation. The key to using our imagination effectively is to learn to harness that infinite potential and direct it to a purpose.
One set of techniques I studied was called Neuro Linguistic Programming. It is thoroughly interesting, especially the earliest stuff, but avoid the cultish seminars. The early stuff was based on Cybernetics and linguistics (specifically Noam Chomsky's transformational grammar). It was quite rigorous as far as informal studies go. It has since become a new-age movement. Tread carefully if you wish to explore it.
One of the most intriguing aspects of NLP is that it teaches you to discover the structure of a mental process through introspection in terms of the raw sensory experience. That is, what do you see, hear, feel, etc. You break it down into the smallest steps that can be measured. You can then understand your own process and give yourself more control over your own apparatus. It sounds like a structured use of imagination. By deconstructing your imagination and guiding another person through the steps, you can transfer the outline of a skill to someone else.
I know this must sound like magic. But don't we expect teachers to pass skills on to others? Teachers routinely break down skills to teach them in a process known as task analysis. You are simply doing it to your imagination. It's not magic. It's not instant knowledge transfer. Practice and experience still count for a lot. But it can get you pretty far.
If it still sounds like magic, all I'm saying is that you switch from A to B:
A: Pure functions are functions that don't have any side-effects. Use your imagination.
B: To determine if a function is a pure function, look through the function line-by-line. On each line, imagine a green checkmark if the line has no side-effects. Imagine a red X if it has side-effects. When all lines have been marked, if you have any red X's, your function is not pure.
While correct, the first explanation gives very little help to your imagination. In fact, my first response to explanations of type A is to think about what it might mean. But I can use my imagination with effort. When I do that, I realize that I am imagining visually scanning the function (in my mind's eye) and marking lines with side-effects.1 The second one asks you to imagine just that. Asking someone to go through a process makes it clearer.2
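As a sketch of process B in Clojure (checkout-total is a made-up example function), scan each line and mark it:

```clojure
;; Hypothetical function to walk through line by line:
(defn checkout-total [cart]
  (let [total (reduce + (map :price cart))] ; green checkmark: pure arithmetic
    (println "charging" total)              ; red X: prints to stdout
    total))
;; One line earned a red X, so checkout-total is not a pure function.
```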
NLP does have a lot of merit, especially when it comes to teaching. Our mental processes can be introspected, analyzed, and described. And we can then guide a student through a process similar to what goes on in our heads when we solve the problem. This is the last piece I need to tell the story of one of the most successful things I have ever taught. Next time.
This is a slight lie. I don't use visual checkmarks. I actually feel different when I see an expression with a side-effect. The feeling infects the whole function it is in once I feel it. My feelings have modeled the scoping rules of the language. This is the ultimate goal of imagination and skill development--when your simulation is accurate enough to be relied on.↩
There is another improvement which I'll get to in the very next post. It will pull together the last few teaching posts.↩
Null pointers are considered by their inventor to be a huge mistake. Clojure inherits its null pointer, called nil, from the JVM. In contrast to Java1, Clojure seems to embrace the null pointer. In this post, I'd like to explore how Clojure uses the null pointer in what is often called nil-punning.
Nil-punning has its roots in the very first Lisps, where nil was both false (the boolean value) and the end of a list (the empty list). It was also often used to represent "no answer", as in what is the first element of an empty list. It is called punning because you can use it to mean different things in different contexts.2
nil, as a value, is nearly void of meaning. And it is all pervasive, because it can be returned from any Clojure function or Java method.
Let's go through that last part bit by bit.
In Java, null is a lack of object, even though it is pointed to by an object pointer. You can't call methods on it. It is not an object. It has a weird nameless type. Clojure did not make this mistake. nil is a first-class value and type3, meaning it can be compared to other values, it can implement protocols, it can be used as the key or value of a map, etc. Using nil where it doesn't make sense in Clojure code is usually a type error, not a NullPointerException, just as using a number as a function is a type error. Because of this, nil can become an asset instead of a liability. Clojure takes nil-punning to an extreme.
nil can be many things. To name but a few, it plays false as a boolean. It plays the empty seq as a seq4. It plays the empty map as a map. Because nil has a role to play in most of the major abstractions of core Clojure, it rarely leads you into an error situation. An unexpected nil can surprise a good programmer just as much as an unexpected Nothing from a Haskell function can bewilder even the most experienced Haskeller.5 Finding out where a nil came from is the hardest problem that nils pose. But nils are normal parts of Clojure programs. They are not anomalous as in Java, where you often have to check for null everywhere. This means nil is always on the experienced programmer's mind.
nils flow like water through s-expressions. first has nothing to return if the seq is empty, and so it returns nil. Because nil is a seq, first works on nil, and returns nil. rest works on nil as well, because nil is a seq.
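At the REPL, the flow of nil through these abstractions looks like this:

```clojure
(first [])     ;=> nil  (nothing to return, so nil)
(first nil)    ;=> nil  (nil acts as the empty seq)
(rest nil)     ;=> ()   (rest also works on nil)
(get nil :k)   ;=> nil  (nil acts as the empty map)
(conj nil 1)   ;=> (1)  (nil acts as the empty list)
```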
These examples show the best of nil-punning. When nil-punning works right, nils are expected and they give the expected results. nil is everywhere, but it can be used mostly everywhere as well--without error and often with exactly the desired result. There are many abstractions that nil does not participate in (for instance IFn, which is Clojure's interface for things that can be called like functions). In these places, nil can present a problem--a problem of type, the same as if you tried to call a number as a function.
The best thing to do, in my experience, is simply to wrap the expression in a (when ...) to catch the nil cases, if appropriate, while also preserving it. Otherwise, perhaps letting the Exception bubble up is the best answer. If you got a nil where you couldn't use one, the stack trace is probably your best clue to where it came from.
After a bit of experience with Clojure, I rarely have difficult problems with out-of-place nils in pure Clojure code. However, there is often some Java interop--namely, calling Java methods directly--that will cause a NullPointerException if the object of the method call is nil. In these cases, wrapping a Java method call in a (when ...) is often appropriate. But sometimes not, and the NullPointerException is welcome.
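For instance (shout is a made-up helper), a when guard preserves the nil instead of letting the interop call throw:

```clojure
;; Hypothetical helper: guard Java interop against nil.
(defn shout [s]
  (when s                ; nil flows through as nil
    (.toUpperCase s)))   ; would throw NullPointerException on nil

(shout "hello") ;=> "HELLO"
(shout nil)     ;=> nil
```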
There are some decisions in Clojure that I think make poor use of nil-punning. These places actually make working with Clojure more difficult than it needs to be. For instance, (str nil) is the empty string. Printing this out prints nothing--a form of silence, which is rarely what you want, so you have to check for nils in these cases. But nil is not the empty string, like it is the empty seq. And (clojure.string/trim nil) throws a NullPointerException. This is inconsistent behavior. When nil acts inconsistently, nil-punning does not work right, and nils need to be checked. In the worst cases, nils fail silently. While I have learned to deal with these situations, they are a wart on the language. The fact that nils are so common does help surface the bugs sooner. A small consolation.
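The inconsistency is easy to see at the REPL:

```clojure
(str nil)                    ;=> ""  (silent: nil puns as the empty string)
;; (clojure.string/trim nil) ; throws NullPointerException instead
```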
Let me make it clear: null pointers are still a costly problem in Clojure. But I can make a claim similar to what Haskellers claim about the type system: nil-punning eliminates a certain class of errors. A fortuitous set of decisions in Clojure has reduced the magnitude of the problem. And some decisions have made the problem worse by hiding it. In general, I find that by embracing nil-punning, my code gets better.
I don't mean to pick on Java alone. I just wanted to be specific.↩
The type of nil is nil itself: in Clojure, (class nil) returns nil.↩
Clojure's core is built on several small, powerful abstractions. The most prominent abstraction is seq, which stands for sequence. seq basically has two operations, first and rest. The most obvious use for them is to iterate through items of a collection. There are built-in implementations for lists, vectors, sets, hashmaps, and even strings. But anything that has a notion of sequential values can implement seq, including Java Iterators. I would also like to posit that the most important and often overlooked implementation of seq is for nil.↩
Even the best Haskellers complain about not knowing where a Nothing came from.↩
You might look at this as nil-preserving behavior--much like the Nothing-preserving behavior of the Maybe Monad.↩
Let's look at this logic problem:

[Image: four cards, each with a letter on one side and a number on the other; the visible faces include an A and a 3. Rule to test: if a card has a vowel on one side, then it has an even number on the other. Which cards must you turn over to check the rule?]

Try to determine the answer.

Hint: It's presented visually to try to trick you (as we saw last time).
The answer is two cards, the A and the 3.
Try this one:
You are a police officer enforcing this law:
It is only legal for minors to drink non-alcoholic beverages.
You are busy, so you need to quickly assess each bar with the minimum amount of checks.
You walk into a bar, and you see this scene:
There are four people in the bar. One is a teenage boy, but you can't see his drink. One is an old man, and you can't see his drink. The third person has a coke1, but he/she is behind a column so you can't see their age. The last person has a big pint of beer, but he/she is also blocked by a column.
Did you get the answer? You need to check two people. You don't need to check the old man, he's obviously not a minor. You don't need to check the person with a coke, even though he might be underage. That leaves the other two. The teenage boy on the left might have an alcoholic drink (which you can't see), and the person you can't see has a beer, so he/she might be a minor.
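The officer's reasoning can be sketched in Clojure (must-check? and the keywords are made-up names for this illustration):

```clojure
;; A person needs checking only when the hidden half could violate
;; "minors drink non-alcoholic beverages".
(defn must-check? [{:keys [age drink]}]
  (or (and (= age :minor) (nil? drink))      ; minor, drink hidden
      (and (= drink :alcohol) (nil? age))))  ; alcohol, age hidden

(filter must-check?
        [{:age :minor :drink nil}        ; teenage boy, drink unseen
         {:age :adult :drink nil}        ; old man, drink unseen
         {:age nil    :drink :soda}      ; coke behind the column
         {:age nil    :drink :alcohol}]) ; beer behind the column
;; only the teenage boy and the hidden beer drinker pass the filter
```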
Was that easier? Would it surprise you if I told you that the card problem and the bar problem are equivalent?2 Why is it easier to solve the bar problem with almost no effort?
There are two reasons. First, it calls on years of built-up, real-world experience. Second, the problem is social--that is, it is about people. We humans are built to handle complex relationships between people. Our reasoning power is somehow magnified and clarified when phrased in terms of people in a familiar situation.
Moral: When possible, present a logic problem as a problem people have no difficulty solving.
What's interesting is that you instantly know the shape of the solution when presented with the bar version of the problem. You think "I'm looking for minors and alcoholic beverages." Whereas my first reaction to the card version was "I'm looking for vowels and even numbers" (which is wrong). Only after careful, slow, deliberate thinking was I able to see that I should be looking for vowels and odds.
If you present material in the right way, it will help you teach the material better. You've likely had this experience before. Did your teacher ever do a math problem in terms of buying something and making change? Somehow, kids who score poorly in math class can still do the same problem when it's presented as a human to human exchange!
Social problems are not the only ones that we are hyper-capable of solving. There are other situations that we are also hardwired to understand better than symbolic puzzles. Spatial orientation (for instance, that the arm sticking out from behind the column is attached to a person) and movement are also easy to solve, and luckily some of the most interesting math problems are equivalent to orientation and motion.
By converting a symbolic problem to one that is a familiar, real-world situation, you are tapping into many different parts of the brain. The key to a good "conversion" is whether the problem solver can properly simulate the situation themselves. The bar problem is good because it's something we can all imagine.
This is one thing I try to take advantage of in my videos. Yes, you are learning Clojure to solve a very complex problem. However, the problem is familiar to most, as it involves many metaphors and simulated situations. You are teaching someone to bake. A function is like a recipe. Pure functions are like doing a calculation in your head. Side effects move you around or use up ingredients, etc. I worked hard to make it seamless to learn.
Now that we know that it's easier to learn something if we can already simulate it, the next question is how to convert math/logic problems from their symbolic form into something more suitable for whole-brain simulation. This post is long enough already, so I'll address that next time.
I am pretty excited about this project. Please check it out.
There are other languages with healthier communities, more momentum, cleaner cores, and features on par with CL. So I have to ask myself, why bother with CL?
I had a similar experience with Common Lisp. Great language, snobby community, little progress, needs more modern amenities.