Re: Why are interpreters so slow today

Alan Knight (knight@mrco.carleton.ca)
Fri, 15 Apr 1994 13:49:33 GMT

In <nagleCoACH4.25p@netcom.com> nagle@netcom.com (John Nagle) writes:

<...looking for fast interpreter...>
> My basic test is to run something equivalent to

> int i; double x = 0.0;
> for (i = 0; i < 1000000; i++) x = x + 1.0;

>The Smalltalk and Python versions are slower than the C version by factors
>of more than 1000. This is excessive. LISP interpreters do a bit
>better, but still don't reach 1/10 of C. Which interpreters do a decent
>job on computation?

For one thing, this is a fairly pathological benchmark. I can't speak
for Python, but none of the current Smalltalk implementations do very
much optimization of floating-point arithmetic. Smalltalk/V Mac, in
particular, has abysmal floating-point performance (you don't say
which dialect you tried).
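
For reference, here's roughly what that loop looks like in Smalltalk
(a sketch only; exact workspace syntax varies a little by dialect):

    | x |
    x := 0.0.
    1 to: 1000000 do: [:i |
        x := x + 1.0]

Every + in there is a full message send to a Float, and a typical
implementation allocates a fresh boxed Float on each iteration, which
goes a long way toward explaining the factor-of-1000 gap on this
particular loop.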

I'd suggest you try a more comprehensive suite of tests. Try doing
integer math, for one thing.
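
As a sketch, an integer version of the same loop would be:

    | x |
    x := 0.
    1 to: 1000000 do: [:i |
        x := x + 1]    "SmallInteger arithmetic: overflow-checked, no allocation"

SmallInteger arithmetic is the case implementations actually work to
make fast, so this should come out far closer to the C loop than the
floating-point version does.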

I believe it would be roughly accurate to say that a decent Smalltalk
should be in the range of 6 to 10 times slower than C for small
samples of "normal" code. It can be much worse for code with lots of
array accesses (which are always bounds-checked in Smalltalk) or
floating-point arithmetic. People argue that this performance
difference evens out dramatically on larger code samples, but that is
much harder to prove.

The major factors here are not interpretation itself. Smalltalk code
normally has many more procedure calls than "equivalent" C code, and
it does much more safety checking (array bounds, integer overflow, etc.).
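
To make the safety checking concrete, here's a small sketch; in
Smalltalk even an indexed read is a message send, and at: validates
its index on every call (dialects differ in the details):

    | a sum |
    a := Array new: 1000 withAll: 1.
    sum := 0.
    1 to: 1000 do: [:i |
        sum := sum + (a at: i)]    "each at: checks 1 <= i <= 1000 first"

Writing a at: 1001 would raise a subscript-bounds error rather than
read past the end of the array, and that check is paid on every
access.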

-- 
 Alan Knight                | The Object People
 knight@acm.org             | Smalltalk and OO Training and Consulting
 alan_knight@mindlink.bc.ca | 509-885 Meadowlands Dr.
 +1 613 225 8812            | Ottawa, Canada, K2C 3N2