Re: Why are interpreters so slow today
15 Apr 94 17:40:59 GMT

In <> (John Nagle) writes:

> Lately, I've been looking at interpreters suitable for use as
>extension languages in a control application. I need something that
>can do computation reasonably fast, say no worse than 1/10 of the
>speed of compiled C code. Interpreters have been written in that speed
>range quite often in the past. But when I try a few of the interpreters
>available on the Mac, performance is terrible.

> My basic test is to run something equivalent to

> int i; double x = 0.0;
> for (i = 0; i < 1000000; i++) x = x + 1.0;

>The Smalltalk and Python versions are slower than the C version by factors
>of greater than 1000. This is excessive. LISP interpreters do a bit
>better, but still don't reach 1/10 of C. What interpreters do a decent
>job on computation?
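
For reference, Nagle's test translates to Python in a few lines; the function name, timing harness, and default loop count below are mine, not from the original post, but the loop body is the same accumulate-a-double test:

```python
import time

def bench(n=1000000):
    """Python version of the benchmark loop:
        int i; double x = 0.0;
        for (i = 0; i < 1000000; i++) x = x + 1.0;
    Returns the final value of x and the elapsed wall-clock time."""
    x = 0.0
    start = time.time()
    for i in range(n):
        x = x + 1.0
    elapsed = time.time() - start
    return x, elapsed

x, elapsed = bench()
print("x = %g after %g seconds" % (x, elapsed))
```

Timing the same loop in compiled C on the same machine then gives the slowdown factor directly.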

You don't specify which Mac Smalltalk implementation you tested.

I tried the following on a Sparc 10/30 running SunOS 4.1.3:

C program:
#include <stdio.h>
#include <time.h>

int main(void)
{
    int i;
    double x = 0.0;
    time_t start, stop;

    start = time(0);
    for (i = 0; i < 100000000; i++)
        x = x + 1.0;
    stop = time(0);

    (void) printf("%ld seconds\n", (long) (stop - start));
    return 0;
}

Smalltalk workspace code:

(Time millisecondsToRun: [ | x | x := 0.0d.
100000000 timesRepeat: [x := x + 1.0d]]) asFloat / 1000.0

When compiled with "/bin/cc -O" the C program took 8 to 9 seconds
in 5 tries. When compiled without optimization, it took 37
seconds in each of 3 tries.

The Smalltalk code, in 3 tries, took 250-252 seconds, making
it about 30 times slower than the optimized C program and about
7 times slower than the unoptimized C.
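
Checking the arithmetic on those ratios (taking midpoints of the reported ranges, which is my assumption):

```python
# Timings reported above; the midpoints are assumptions of mine.
smalltalk = 251.0   # seconds, midpoint of 250-252
c_opt     = 8.5     # seconds, midpoint of 8-9 (cc -O)
c_unopt   = 37.0    # seconds, unoptimized cc

print(round(smalltalk / c_opt))    # about 30x slower than optimized C
print(round(smalltalk / c_unopt))  # about 7x slower than unoptimized C
```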

Note that like Self, ParcPlace Smalltalk compiles to native
machine code the first time you run the Smalltalk code, then
thereafter executes the native code from an in-memory cache.

Michael Khaw
ParcPlace Systems, Sunnyvale, CA