Re: Why are interpreters so slow today

Lawrence G. Mayka
Thu, 21 Apr 1994 14:28:07 GMT

In article <2oub9a$> (Joe Armstrong) writes:

In article <>, (John Nagle) writes:
|> Lately, I've been looking at interpreters suitable for use as
|> extension languages in a control application. I need something that
|> can do computation reasonably fast, say no worse than 1/10 of the
|> speed of compiled C code. Interpreters have been written in that speed
|> range quite often in the past. But when I try a few of the interpreters
|> available on the Mac, performance is terrible.
|> My basic test is to run something equivalent to
|> int i; double x = 0.0;
|> for (i = 0; i < 1000000; i++) x = x + 1.0;
|> The Smalltalk and Python versions are slower than the C version by factors
|> of greater than 1000. This is excessive. LISP interpreters do a bit
|> better, but still don't reach 1/10 of C. What interpreters do a decent
|> job on computation?
|> John Nagle

Well, one reason could be that Smalltalk usually gets integer
arithmetic right and C doesn't. Multiply two integers in C whose
product overflows 32 bits and you get the wrong answer - many
interpreted languages coerce automatically to bignums and get the
answer right.

Another reason could be the lack of type information - Lisps
etc. are usually dynamically typed, so the contents of a variable can
change type during execution. This involves a good deal of run-time
tag checking.

C compilers can do a good job of compilation because the
programmers have to specify a lot of staggeringly boring little
details - this is why C programming is such fun: it takes a great deal
of time, trouble, and skill to get all the details right.

It depends what you want - speed of execution or ease of writing.

Since my machine is blindingly fast I'd go for ease of writing!

The compromise is to add type declarations in the very few tight loops
(such as the one above) where programs typically spend most of their
time but which account for very little of their functionality. I
compiled and ran the following on a Sparc LX:

int main(void)
{
    int i;
    double x = 0.0;
    for (i = 0; i < 1000000; ++i)
        x = x + 1.0;
    return (int)x;  /* use x so the loop isn't optimized away */
}

The user time was 0.61 sec.

I then compiled and ran the following on a Common Lisp implementation
on that same Sparc LX:

(defun my-float-loop-double ()
  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
  (do ((i 0 (the fixnum (1+ i)))
       (x 0d0 (the double-float (+ 1d0 x))))
      ((>= i 1000000) x)
    (declare (fixnum i) (double-float x))))

The user time was 0.23 sec. The Common Lisp code was close to three
times faster! The function returns the correct value, and its
disassembly includes the telltale "add.d %f30,%f28,%f30", branch
instructions, etc., so it's not optimizing away the loop.

        Lawrence G. Mayka
        AT&T Bell Laboratories

Standard disclaimer.