Re: Why are interpreters so slow today

Joe Armstrong
18 Apr 1994 16:08:10 GMT

In article <>, (John Nagle) writes:
|> Lately, I've been looking at interpreters suitable for use as
|> extension languages in a control application. I need something that
|> can do computation reasonably fast, say no worse than 1/10 of the
|> speed of compiled C code. Interpreters have been written in that speed
|> range quite often in the past. But when I try a few of the interpreters
|> available on the Mac, performance is terrible.
|> My basic test is to run something equivalent to
|> int i; double x = 0.0;
|> for (i = 0; i < 1000000; i++) x = x + 1.0;
|> The Smalltalk and Python versions are slower than the C version by factors
|> of greater than 1000. This is excessive. LISP interpreters do a bit
|> better, but still don't reach 1/10 of C. What interpreters do a decent
|> job on computation?
|> John Nagle

Well, one reason could be that Smalltalk usually gets integer
arithmetic right and C doesn't. Multiply two integers in C that
overflow 32 bits and you get the wrong answer - many interpreted
languages coerce automatically to bignums and get the answer right.

Another reason could be the lack of type information - Lisps
etc. are usually dynamically typed, so the contents of a variable can
change type during execution. This involves a good deal of run-time
tag checking.
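Here's a rough sketch of what that tag checking costs on every single addition - the names (value, TAG_INT, etc.) are made up for illustration, not from any particular interpreter:

```c
/* A dynamically typed value: a tag plus a payload. */
enum tag { TAG_INT, TAG_DOUBLE };

struct value {
    enum tag tag;
    union { int i; double d; } u;
};

/* Adding two values: inspect both tags, dispatch on the
   combination, and tag the result.  A C compiler, knowing the
   types at compile time, emits one add instruction instead. */
struct value add_values(struct value a, struct value b)
{
    struct value r;
    if (a.tag == TAG_INT && b.tag == TAG_INT) {
        r.tag = TAG_INT;
        r.u.i = a.u.i + b.u.i;
    } else {
        double x = (a.tag == TAG_INT) ? (double)a.u.i : a.u.d;
        double y = (b.tag == TAG_INT) ? (double)b.u.i : b.u.d;
        r.tag = TAG_DOUBLE;
        r.u.d = x + y;
    }
    return r;
}
```

Run Nagle's million-iteration loop through something like that and the factor over compiled C is not surprising.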

C compilers can do a good job of compilation because the
programmers have to specify a lot of staggeringly boring little
details - this is why C programming is such fun: it takes a good deal
of time, trouble and skill to get all the details right.

It depends on what you want - speed of execution or ease of writing.

Since my machine is blindingly fast I'd go for ease of writing!