Re: Why are intepreters so slow today (longish)

Mike Gertz (emgertz@scripps.edu)
16 Apr 1994 00:01:24 GMT

In article <nagleCoACH4.25p@netcom.com> John Nagle, nagle@netcom.com
writes:
> Lately, I've been looking at interpreters suitable for use as
>extension languages in a control application. I need something that
>can do computation reasonably fast, say no worse than 1/10 of the
>speed of compiled C code. Interpreters have been written in that speed
>range quite often in the past. But when I try a few of the interpreters
>available on the Mac, performance is terrible.
>
> My basic test is to run something equivalent to
>
> int i; double x = 0.0;
> for (i = 0; i < 1000000; i++) x = x + 1.0;
>
>The Smalltalk and Python versions are slower than the C version by
>factors of greater than 1000. This is excessive. LISP interpreters do a bit
>better, but still don't reach 1/10 of C. What interpreters do a decent
>job on computation?
>
> John Nagle

Actually, haven't I seen your name a lot in comp.sys.mac.programmer? If
so, then this info will be meaningful to you.

I was curious after reading your post, so I duplicated your test myself on
MCL (Macintosh Common Lisp) and Smalltalk Agents (from Quasar Knowledge
Systems). The numbers don't seem as bleak as you are saying. (Of course,
something could be wrong with my programs, but we won't mention that.)

I should say first that the test you are running is a particularly hard
test for these systems. On these systems, floating point numbers are
allocated on the heap, so you get the overhead of memory allocation and
GC. A slowdown in floating point computation therefore does not
necessarily indicate an overall slowdown. In particular, integer math can
be much faster.
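For comparison, the quoted Python case is easy to reproduce. A minimal sketch (my code, not from the original test; timings will of course vary wildly by machine and Python version):

```python
import time

def add_test():
    # Same loop as the benchmark: a million float additions.
    # Each x + 1.0 allocates a fresh float object on the heap,
    # which is exactly the overhead described above.
    x = 0.0
    for i in range(1000000):
        x = x + 1.0
    return x

start = time.time()
result = add_test()
elapsed_ms = (time.time() - start) * 1000.0
print("%d ms, result = %f" % (elapsed_ms, result))
```

The result should be 1000000.0; the interesting part is how long the heap-allocating loop takes relative to the compiled C version.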

Anyway, here are the results.
MCL:

(defun add-test ()
  (let ((x 0.0))
    (dotimes (i 1000000)
      (declare (fixnum i))
      (setf x (+ x 1.0)))))

? (without-interrupts (time (add-test)))
(ADD-TEST) took 5385 milliseconds (5.385 seconds) to run.
Of that, 16 milliseconds (0.016 seconds) were spent in The Cooperative
Multitasking Experience.
1076 milliseconds (1.076 seconds) was spent in GC.
8000064 bytes of memory allocated.

I didn't have much success optimizing this further.

C:
#include <stdio.h>

#include <types.h>
#include <time.h>

main()
{
    long i;
    long double x = 0.0;
    long stop, start;

    printf("ho\n");

    start = TickCount();
    for (i = 0; i < 1000000; i++)
        x = x + 1.0;
    stop = TickCount();

    printf("%ld ticks\n", stop - start);

    return 0;
}

The result was 49 ticks = 816 ms (one tick is 1/60 of a second).

This was compiled with optimizations using Think C, with '20 and '881
instructions and native floating point format turned on.

My first time running this program, using the factory settings, I got
192 ticks.

STA:
[ | x |
    x := 0.0.
    1 to: 1000000 do: [
        x := x + 1.0.
    ].
] millisecondsToRun.
14536

That works out to less than 1/10 the speed of C (14536 ms vs. 816 ms),
but it's not too bad.
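For reference, the slowdown factors implied by these numbers (my arithmetic, not figures from the original runs):

```python
c_ms = 816.0    # Think C, optimized ('20/'881): 49 ticks
mcl_ms = 5385   # MCL, total time including GC
sta_ms = 14536  # Smalltalk Agents

print("MCL is %.1fx slower than C" % (mcl_ms / c_ms))  # about 6.6x
print("STA is %.1fx slower than C" % (sta_ms / c_ms))  # about 17.8x
```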

This is all on a Quadra 840AV.

--Mike Gertz