Functional Thinking in Clojure: Part 2

Welcome to part 2 of my Functional Thinking in Clojure series. Today we’ll discuss how to optimize the Number Classifier we implemented last time, the same way the original Java implementation was optimized, and how to write closures in Clojure.

The Number Classifier (Optimized and Optimizable)

If you remember, the requirements for the Number Classifier are as follows:

… given any positive integer greater than 1, you must classify it as either perfect, abundant, or deficient. A perfect number is a number whose factors (excluding the number itself as a factor) add up to the number. Similarly, an abundant number’s sum of factors is greater than the number, and a deficient number’s sum of factors is less.

Our original implementation looked like this:
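
In sketch form (helper names beyond classify, aliquot-sum, and proper-factors are my own):

```clojure
;; A sketch of the naive version described below; helper names are mine.
(defn factors
  "Every factor of n, found by testing each candidate from 1 to n."
  [n]
  (filter #(zero? (rem n %)) (range 1 (inc n))))

(defn proper-factors
  "All factors of n except n itself."
  [n]
  (remove #(= n %) (factors n)))

(defn aliquot-sum [n]
  (reduce + (proper-factors n)))

(defn classify [n]
  (let [sum (aliquot-sum n)]
    (cond (= sum n) :perfect
          (> sum n) :abundant
          :else     :deficient)))
```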

For a full discussion of my reasoning behind that implementation, see part 1 of this series. For now, all I’ll point out is that it’s a naive and terribly slow implementation for one very good reason.

The implementation of aliquot-sum relies on finding the proper factors of a given number. The implementation of that function in turn relies on finding all of the factors of a number and then filtering them. That means that I have to check every number from 1 to number and see if it’s a factor.

It’s correct, but man alive is it slow.

So let’s correct that.

Neal points out in part 2 of his series (and mentions in part 1) that there is a very simple optimization: if you have the factors of a number up to its square root, you can quickly find the rest of the factors by dividing the number by each of those factors in turn.
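
In Clojure that trick looks something like this sketch, folded straight into the sum (the let-bound step names are mine):

```clojure
;; Only candidates from 1 to (Math/sqrt n) are tested directly; the matching
;; "large" factors fall out by dividing n by each small factor. The set takes
;; care of the duplicate factor that appears when n is a perfect square.
(defn aliquot-sum-optimized [n]
  (let [limit         (int (Math/sqrt n))
        small-factors (filter #(zero? (rem n %)) (range 1 (inc limit)))
        large-factors (map #(/ n %) small-factors)
        all-factors   (set (concat small-factors large-factors))]
    (reduce + (remove #(= n %) all-factors))))
```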

So what we’ll do is open up our implementation to our users and give them the ability to provide their own implementation of our logic.
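
Roughly like this; it’s a sketch of the shape, and exactly where the optional argument lands is my guess:

```clojure
;; classify accepts the summing strategy as an optional argument; calling it
;; with just a number falls back on the naive aliquot-sum from before.
(defn classify
  ([n] (classify n aliquot-sum))
  ([n sum-fn]
   (let [sum (sum-fn n)]
     (cond (= sum n) :perfect
           (> sum n) :abundant
           :else     :deficient))))

;; (classify 28)                       ; naive default
;; (classify 28 aliquot-sum-optimized) ; square-root version swapped in
```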

What have we done?

  1. I added a comment form at the bottom of core.clj that contains calls to time, which can be executed from Emacs using C-x C-e if you’re using clojure-jack-in (there’s a sketch of it after this list). They’re a quick and easy way to measure how long a given call to classify takes, which lets us see whether our optimizations are actually making a difference.

  2. As you can see, if there’s a particular part of your algorithm that you might want to swap a different implementation in for, you just make that part an argument. Functions are first-class citizens in Clojure, so we simply pass the algorithm part in to our function and it gets used.

  3. I use the idiom of arity overloading to provide defaults for arguments all the time in my own code. This allows us to call aliquot-sum naked and get the un-optimized version but at the same time provide a different function if we so choose. Obviously, by default you would want the optimized version but I left it this way for clarity.

  4. aliquot-sum-optimized utilizes a very common Clojure idiom of naming the steps of a computation with let bindings. Remember, the core optimization is to only explicitly check the numbers from 1 to the square root and then use those results to get the other factors. That’s easy to do by naming each step and then combining them at the end.

  5. I also demonstrate here just how easy it is to use Java functions (read ‘static methods’) from Clojure. The Java Math class implements all sorts of goodness for mathematical operations, all of it as static methods. We love static methods because static methods are (or should be!) functional, and thus we can call them directly by pretending that the class is a namespace (Math/sqrt, for example).

  6. Notice that because the base abstraction for our system is a function, I can easily reuse the same function for finding factors that I used with the proper-factors implementation. I didn’t wrap that up in a class that I would then have to reference elsewhere, so I just get that logic for free.
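
The comment form from point 1 looks something like this (the number is just a placeholder of my own):

```clojure
;; Forms inside (comment ...) never run at load time, but each one can be
;; evaluated individually from the editor with C-x C-e.
(comment
  (time (classify 9876543))                        ; naive version
  (time (classify 9876543 aliquot-sum-optimized))) ; square-root version
```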

That’s enough for that.

Now we move on to the ugliness of state…

make-counter With A Closure

I’ll admit outright that I don’t get closures. I think I understand them technically: you have a block of code that has implicit local bindings to the variables in its containing scope, so it can change its references to them without changing other blocks that have their own references to them.

I’ve also used them to great effect in Gradle et al., so I get their usefulness in terms of library and framework construction. But maybe I just don’t get the big deal about them. What’s the big deal that’s a bigger deal than first-class functions? I don’t know.

But that won’t stop me from showing how I’d implement it in Clojure!
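
Here’s the shape of it (a sketch that matches the notes below):

```clojure
;; make-counter closes over a let-bound atom; each counter it returns gets
;; its own private incremented to mutate via swap!.
(defn make-counter []
  (let [incremented (atom 0)]
    (fn [] (swap! incremented inc))))

;; Each counter keeps its own state:
;; (def c1 (make-counter))
;; (def c2 (make-counter))
;; (c1) ;=> 1
;; (c1) ;=> 2
;; (c2) ;=> 1
```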

  1. Yuk! Look at that state! I have no tests written for this guy because it’s hard to test functions that rely on state.

  2. make-counter relies on a let-bound atom named incremented, which starts at 0. Every anonymous function returned from make-counter closes over its own incremented, so each counter keeps its own private count.

  3. The defn form is actually a macro that does nothing but expand into a different form: (def function-name (fn [args] ...)). That’s why I can (def c1 (make-counter)) and then call c1 as a function. It is a function: make-counter returns one, and that returned function gets bound to the name c1.

  4. atom is one of the concurrency primitives provided by Clojure. One of Clojure’s killer features on the JVM is its STM and explicit state management. Everything is a value in Clojure by default; if you need state, you have to use one of the concurrency primitives, which force you to go through controlled operations rather than plain assignment to update their values. It’s overall quite a brilliant system, but documenting it is well beyond the scope of this post. For now, if you use atoms, you just need to be aware that the way to update their state is with swap! (there’s a tiny example after this list). Read up if you’re interested!
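
As a tiny standalone example of point 4 (the counter-state name is mine): swap! applies a function to the atom’s current value and stores the result.

```clojure
(def counter-state (atom 0))   ; an atom wrapping the value 0

(swap! counter-state inc)      ;=> 1
(swap! counter-state + 10)     ;=> 11
@counter-state                 ;=> 11, deref reads the current value
```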

Hopefully you’ve gleaned some useful tips from this post. Stay tuned till next time when I cover the code samples provided in the third article of Neal Ford’s Functional Thinking series.