Heroku is a great service, especially for the lone developer who wants free hosting for an app. The free hosting works just like the paid service except that your server will be "spun down", meaning that after five minutes of no activity, your server is stopped. It will be "spun up" again when there is a request. This spin up process can take a while and certainly does not give a good user experience.
Luckily there is a recommended way of avoiding this delay. The solution is to run New Relic monitoring, which periodically polls your server, avoiding the five minutes of no activity and hence keeping your server running.
In addition to keeping your server from falling asleep, it also gathers lots of profiling information that could help you understand your server as it runs in production. Luckily, both Heroku and New Relic offer free tiers. This brings us to my favorite financial formula:
Free + Free = Free
You can also use this process with the paid versions. Note also that Heroku itself recommends setting this up, so don't feel guilty!
I am assuming you are already using Heroku and have a working app hosted there, using an uberjar deploy. You also have the Heroku Toolbelt installed.
1. Add the New Relic add-on to your app.
$ heroku addons:add newrelic:stark
I recommend the stark plan for free apps, but you can choose any of the New Relic tiers.
2. Find and download the latest New Relic Java API release.
This download page lists all of the versions. Find the latest one and download the ZIP file.
3. Unzip the ZIP file into the base of your app.
$ cd projects/my-app
$ unzip ~/Downloads/newrelic-java-3.2.0.zip
It should unzip into a newrelic/ directory.
4. Check your .gitignore file.
New Relic includes its own JAR file, which needs to be deployed with your app in the git repo. My .gitignore included a line *.jar, which would exclude all JAR files. Remove this line if you see it.
5. Add the .gitignore and the newrelic/ directory to your repo.
$ git add .gitignore
$ git add newrelic
$ git commit -m "Add New Relic monitoring agent."
Make sure the file newrelic/newrelic.jar was added.
6. Release to Heroku.
$ git push heroku master
7. Configure your app to use New Relic.
We need to add a new JVM option. There is an environment variable called JVM_OPTS which is typically used to do this. Find out what value it has now.
$ heroku config
Find the line starting with JVM_OPTS:. Mine says "-Xmx400m". Now we add the New Relic agent to the variable:
$ heroku config:set JVM_OPTS="-Xmx400m -javaagent:newrelic/newrelic.jar"
The app should restart with the new options. Visit your Heroku dashboard, find your app, and click on the New Relic add-on to see the New Relic dashboard for your app. The first load after a restart might take some time, but subsequent loads will be at full speed, and your server won't spin down.
Nice validation library plus very good motivational blog post.
Clay Shirky nails it with nice, narrative style.
It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures. -- Alan Perlis
In his essay titled "Dynamic languages are static languages", Robert Harper states:
the so-called untyped (that is "dynamically typed") languages are, in fact, unityped. Rather than have a variety of types from which to choose, there is but one!
The notion that a language with no types also has one type may seem absurd, but we can entertain the perspective that a language with dynamic typing actually has a single static type. The argument is largely rhetorical, but it does seem to be a valuable perspective. However, I disagree with the final conclusion:
And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type! Every single value has to be a value of that type, you have no choice!
I believe that a well-chosen type is freeing.
In Smalltalk, "everything is an object". This statement is one of the first things you might hear about the language--from people who use it and like it. What is that statement but a declaration of having one type? That one type, with its single operation (send message), is what made Smalltalk so powerful.
Smalltalk's power comes from adhering to a small, firm discipline that others also adhere to. Much like the laws of a government might be binding in a certain sense, in another they give rise to the freedom and privilege citizens enjoy in society. The prohibition of murder allows us the freedom to move about without fear of death. The rules governing market transactions ensure a minimum of fairness. Everyone benefits.
In the same way, Smalltalk's message passing presents a very small point of agreement in order to participate in the bounty of the runtime. Powerful tools, services, and metaprogramming facilities were available for the small price of conforming to a very simple type.
Far from being restrictive, a single type is freeing. The rhetoric of "bondage" does not hold, since there is an existential proof of a single type being the source of freedom and power. I wonder if a paucity of perspectives is part of the reason such an obvious flaw can arise in an intelligent essay. There are no sides in the search for better tools. We will want to combine the powerful elements from as many different perspectives as possible.
Henry David Thoreau was a content creator active in the 1850s.
Michael Fogus is right (as usual) about this one. Someone of Odersky's stature should not fear a lack of humility but instead a lack of confidence.
Way back in May 2012, Ambrose was beginning work on Typed Clojure for Google Summer of Code. I interviewed him to get an idea of what kind of type system we could expect. Since he's now running a very successful crowd funding campaign, I thought I'd bring up this blast from the past.
A Lisp with a macro system is actually two languages in a stack. The bottom language is the macro-less target language (which I'll call the Lambda language). It includes everything that can be interpreted or compiled directly.
The Macro language is a superset of the Lambda language. It has its own semantics, which is that Macro language code is recursively expanded into code of the Lambda language.
Why isn't this obvious at first glance? My take is that because the syntax of both languages is the same, and the output of the Macro language is Lambda language code (instead of machine code), it is easy to see the Macro language as a mere feature of the Lisp. Macros in Lisp are stored in the dynamic environment (in a way similar to functions) and are compiled just like functions (macros are themselves written in the Macro language), which makes it even easier to confuse the layers. It seems like a phase in some greater language which is the amalgam of the two.
However, it is very useful to see these as two languages in a stack. For one, realizing that macroexpansion is an interpreter (called macroexpand) means that we can apply all of our experience of programming language design to this language. What useful additions can be added? Also, it makes clear why macros typically are not first-class values in Lisps: they are not part of the Lambda language, which is the one in which values are defined.
The separation of these two languages reveals another subtlety: that the macro language is at once an interpreter and a compiler. The semantics of the Macro language are defined to always output Lambda language, whereas the Lambda language is defined as an interpreter (as in McCarthy's original Lisp paper) and the compiler is an optimization. We can say that the Macro language has translation semantics.
But what if we define a stack that only allows languages whose semantics are simply translation semantics? That is, at the bottom there is a language whose semantics define what machine code it translates to. We would never need to explicitly write a compiler for that language (it would be equivalent to the interpreter). This is what I am exploring now.
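The two-language view can be sketched in a few lines. This is a toy, hypothetical expander (not any real Lisp's), using nested Python tuples as s-expressions: macroexpand is an interpreter for the Macro language whose output is code in the macro-free Lambda language.

```python
# A toy macroexpander. Expressions are nested tuples such as
# ('if', test, then, else); atoms are anything that is not a tuple.
MACROS = {}

def defmacro(name):
    """Register an expander function under a macro name."""
    def register(expander):
        MACROS[name] = expander
        return expander
    return register

# ('when', test, body) expands to ('if', test, body, None)
@defmacro('when')
def expand_when(form):
    _, test, body = form
    return ('if', test, body, None)

def macroexpand(form):
    """Recursively expand until only Lambda-language forms remain."""
    if not isinstance(form, tuple):
        return form                                  # atoms expand to themselves
    head = form[0]
    if head in MACROS:
        return macroexpand(MACROS[head](form))       # expand, then re-expand the result
    return tuple(macroexpand(sub) for sub in form)   # otherwise expand subforms

print(macroexpand(('when', ('ready',), ('launch',))))
# => ('if', ('ready',), ('launch',), None)
```

Note that macroexpand never evaluates anything; its "semantics" are purely translation into the lower language, which is the point of the stack view.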
When I was training as a teacher, I gave a simple quiz with True/False (T/F) questions. The results were terrible. Worse than chance. On one question, about 20% of the class got it right.
I had asked a simple question involving a logical AND:
True or False?
A parallelogram has parallel opposite sides AND it has five sides.
Eighty percent of the class chose 'True', even though all parallelograms have four sides. The other teachers told me the question was difficult because it was a T/F question. They said they never give T/F questions because they only confuse the kids. They said I should just forget about T/F and try a different type of question. But it was my class and my time to explore teaching and I knew that this question was not that hard. Several connections became clear in my mind: using the right part of their brains, making the problem about people, and using their imaginations effectively. I wanted to give it a shot.
I planned the next class around answering True/False questions. There would be an experiment to confirm my suspicion (that the kids were using the wrong part of their brains), a lesson using an imaginative process, and then a similar quiz to see how it worked.
The next morning in class, I wrote the T/F question on the blackboard and called a student up to answer it. He read it and said 'True' (the wrong answer). I asked him "what about this part?", pointing to the false part. He was clearly confused. The part about five sides was obviously false to him. He then began looking around through the question and stopped at the first part (the true part). He pointed at it and said 'True', as if it negated the fact that the other part was false. It's hard to describe, but I was convinced that he was simply looking for something that was true to make the whole question true. And he thought that it was the right answer. My hypothesis was confirmed: he was using a visual search strategy--scanning for something true--when the question called for logical evaluation.
I then demonstrated an imagination process for solving True/False questions. It went like this:
When solving a True/False question, I first imagine someone standing in front of me. He says the statement from the question to me. If he is lying, the answer is False. If he is telling the truth, the answer is True.
I asked a couple of people to carry out the process while narrating it to me. They seemed to be able to do it (and they got it right). So then I gave the quiz.
The result? Correct answers went from 20% to 80%. I felt like I was finally testing their knowledge of the material and not their understanding of test-taking strategies.
How did it work? By converting the problem from a logic skill to a social skill, the students could totally bypass the need to process difficult symbolic rules. And we could solve it as a social problem by using a structured process of imagination.
True/False questions are difficult because there are so many levels of binary confusion. First, you are looking for the correct (as opposed to the incorrect) answer. Then you must determine the truth value of the whole statement, which is a function of the truth values of the sub-statements. It's just a lot of levels to keep in your head.
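The layered evaluation the question demands can be written out mechanically. In Boolean terms the quiz statement is a conjunction, and a single false conjunct makes the whole statement false (variable names here are mine, for illustration):

```python
# The parallelogram question, evaluated as a conjunction of two sub-statements.
has_parallel_opposite_sides = True   # true of every parallelogram
has_five_sides = False               # false: a parallelogram has four sides

statement_is_true = has_parallel_opposite_sides and has_five_sides

answer = "True" if statement_is_true else "False"
print(answer)  # False: the one false conjunct decides the AND
```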
The imaginative process cuts through all of that and asks one question: is he lying? You are offloading the processing to the social part of your brain, which can easily do it if the problem is framed in the right way.
I always hated in school when the teacher would instruct us to use our imaginations to solve problems. Well, hmm. Especially as a kid, my imagination was filled with magic. Not very useful. Not that you shouldn't use your imagination. You should. But as an instruction, I find it lacking. The teachers were just being lazy.
See, the thing I learned through a lot of experiments and reading is that our imagination is very powerful if we use it correctly. But just as our computers can run any possible program, our minds can imagine any situation. The key to using our imagination effectively is to learn to harness that infinite potential and direct it to a purpose.
One set of techniques I studied was called Neuro Linguistic Programming. It is thoroughly interesting, especially the earliest stuff, but avoid the cultish seminars. The early stuff was based on Cybernetics and linguistics (specifically Noam Chomsky's transformational grammar). It was quite rigorous as far as informal studies go. It has since become a new-age movement. Tread carefully if you wish to explore it.
One of the most intriguing aspects of NLP is that it teaches you to discover the structure of a mental process through introspection in terms of the raw sensory experience. That is, what do you see, hear, feel, etc. You break it down into the smallest steps that can be measured. You can then understand your own process and give yourself more control over your own apparatus. It sounds like a structured use of imagination. By deconstructing your imagination and guiding another person through the steps, you can transfer the outline of a skill to someone else.
I know this must sound like magic. But don't we expect teachers to pass skills on to others? Teachers routinely break down skills to teach them in a process known as task analysis. You are simply doing it to your imagination. It's not magic. It's not instant knowledge transfer. Practice and experience still count for a lot. But it can get you pretty far.
If it still sounds like magic, all I'm saying is that you switch from A to B:
A: Pure functions are functions that don't have any side-effects. Use your imagination.
B: To determine if a function is a pure function, look through the function line-by-line. On each line, imagine a green checkmark if the line has no side-effects. Imagine a red X if it has side-effects. When all lines have been marked, if you have any red X's, your function is not pure.
While correct, the first explanation gives very little help to your imagination. In fact, my first response to explanations of type A is to think about what it might mean. But I can use my imagination with effort. When I do that, I realize that I am imagining visually scanning the function (in my mind's eye) and marking lines with side-effects. The second one asks you to imagine just that. Asking someone to go through a process makes it clearer.
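As a concrete instance of the line-by-line check described in B (both functions are hypothetical examples, not from any particular codebase):

```python
log = []  # external state shared by the functions below

def add(a, b):
    return a + b       # every line gets a green checkmark: pure

def add_and_log(a, b):
    total = a + b      # green checkmark: no side-effects here
    log.append(total)  # red X: mutates external state
    return total       # one red X above, so the function is impure
```

Calling add leaves log untouched; calling add_and_log grows it, which is exactly the kind of line the process asks you to mark with a red X.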
NLP does have a lot of merit, especially when it comes to teaching. Our mental processes can be introspected, analyzed, and described. And we can then guide a student through a process similar to what goes on in our heads when we solve the problem. This is the last piece I need to tell the story of one of the most successful things I have ever taught. Next time.