Re: Why are interpreters so slow today

Steven D. Majewski (sdm7g@elvis.med.Virginia.EDU)
Mon, 18 Apr 1994 21:20:02 GMT

In article <DJOHNSON.94Apr15021813@arnold.ucsd.edu>,
Darin Johnson <djohnson@arnold.ucsd.edu> wrote:
>
>Then again, maybe interpreters seemed better because C compilers
>used to be worse :-)

Although the Self papers are specifically about a "pure" object-oriented
language, and one that is dynamically compiled rather than strictly
interpreted: (1) a lot of the assumptions about a dynamic object-oriented
language hold for interpreters in general - whether they are
object-oriented or not, they often need to do a lot of dynamic binding
and name resolution ( although the more static those bindings are, the
easier it is to pre-compile them once, as in Forth, which distinguishes
between the outer "text" interpreter and the inner threaded-code
interpreter - see the sketch below ); and (2) I think that in this
discussion, what most folks mean is an interpreted-style or interactive
language, not necessarily an interpreted implementation, and they accept
dynamic compilation into byte-code/threaded-code/native-code
as a typical implementation.
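
To make that Forth distinction concrete, here is a minimal sketch in C
(mine, with a made-up word set - not from the Self papers or any real
Forth): the outer text interpreter has already resolved every name to a
code address, so the inner interpreter does no lookup at all, just an
indirect call per word.

/* Minimal sketch of an inner interpreter: names were bound to
 * addresses once, up front, by the outer "text" interpreter. */
#include <stdio.h>

typedef void (*prim)(void);

static long stack[64];
static int sp = 0;

static void push2(void) { stack[sp++] = 2; }
static void push3(void) { stack[sp++] = 3; }
static void add(void)   { long b = stack[--sp]; stack[sp-1] += b; }
static void print(void) { printf("%ld\n", stack[--sp]); }

int main(void)
{
    /* "Compiled" definition: just a list of pre-resolved code addresses. */
    prim threaded[] = { push2, push3, add, print, NULL };

    for (prim *ip = threaded; *ip; ip++)   /* the inner interpreter */
        (*ip)();                           /* prints 5 */
    return 0;
}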

They make the points:

Compilers can aggressively inline code to avoid procedure calls.

Object-oriented languages ( and, I would assert, interpreted
languages in general ) use function/method calls for low-level
operations.

High call frequencies interfere with good performance.
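
A toy C illustration of those three points (hypothetical, not from the
papers): the first loop pays an indirect call for every low-level
operation, the way dynamically bound method calls do, so the compiler
cannot inline it; the second is what the compiler is free to inline,
unroll, and keep in registers.

#include <stdio.h>
#include <stddef.h>

static long add_op(long a, long b) { return a + b; }

/* One indirect call per element - the compiler can't see through it. */
static long sum_via_calls(const long *v, size_t n, long (*op)(long, long))
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s = op(s, v[i]);
    return s;
}

/* The same loop with the operation inlined by hand. */
static long sum_inlined(const long *v, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

int main(void)
{
    long v[] = { 1, 2, 3, 4, 5 };
    printf("%ld %ld\n", sum_via_calls(v, 5, add_op), sum_inlined(v, 5));
    return 0;
}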

Some of the cases where people have pointed to a near parity of
interpreted and compiled code have been: (1) where the C routines
are themselves interpreters ( FP emulation and regular expressions -
most regex code 'compiles' a finite state machine representation
of a string pattern, which then "interprets" the input string ), or
(2) where the interpreter's built-in functions do pretty much the
same thing that the compiled C functions would do ( regular
expressions, again, are a good example: an interpreter calling
a regex subroutine has only slightly more overhead, amortized
over a large benchmark, than a C program calling a regex subroutine. )
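
A concrete (and entirely hypothetical - this is no one's actual regex
library) C sketch of that point: the pattern "ab*c" is "compiled" into
a little finite state machine, and matching is then just an interpreter
loop driving that machine over the input string.

#include <stdio.h>

enum { START, SAW_A, ACCEPT, REJECT };

/* The "compiled" pattern ab*c, as a state-transition function. */
static int step(int state, char c)
{
    switch (state) {
    case START: return c == 'a' ? SAW_A : REJECT;
    case SAW_A: return c == 'b' ? SAW_A : (c == 'c' ? ACCEPT : REJECT);
    default:    return REJECT;   /* anchored match: extra input rejects */
    }
}

/* The matcher is an interpreter of that machine over the input. */
static int match(const char *s)
{
    int state = START;
    for (; *s && state != REJECT; s++)
        state = step(state, *s);
    return state == ACCEPT;
}

int main(void)
{
    printf("%d %d %d\n", match("abbbc"), match("ac"), match("abx"));  /* 1 1 0 */
    return 0;
}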

Also: if you look at some of the early benchmarks comparing RISC
and CISC processors, you see a pattern where some types of code
benchmarked around 10x better on a RISC, while other types of code
benchmarked in something like a 0.7 to 4.0 range. ( This is from
memory - I don't have the papers on hand, but as I remember it,
the average was probably close to 2x. ) The sorts of programs that
were at a relative disadvantage were Lisp benchmarks and unix
utilities like awk and sed that do a lot of interpretation.

The conclusion I drew from those (and other) numbers was that
the added indirection of interpreted and/or object-oriented
code did not benefit nearly as much from the added speed of a
RISC as the typical SPEC-type benchmarks did.

There have been numerous threads in comp.arch complaining that
shared libraries are a bad thing, just because they add an extra
level of indirection on every procedure call. ( Well - not
necessarily, if they are implemented "Right" :-)

I would also point to the many discussions on comp.arch contra
stack-oriented processors, where the problems stack processors have
in taking advantage of some of the latest compiler optimization
techniques were raised. The virtual machines of most interpreters
are stack oriented ( a minimal dispatch loop is sketched below ).
[ That is the very reason I and others in that thread insisted
that, given those disadvantages, we would STILL like to see better
stack architectures - they may benchmark C code worse than current
RISCs, but *if* they run our interpreters and O-O code better, *then*
that would be a worthwhile trade. ]
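
For concreteness, here is a minimal stack-VM dispatch loop in C (a
hypothetical bytecode set of my own, not any particular interpreter's):
every virtual operation costs a dispatch plus operand-stack traffic,
which is exactly the kind of code the compiler-oriented RISC work helps
least.

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    long stack[32];
    int sp = 0, pc = 0;

    for (;;) {
        switch (code[pc++]) {                        /* dispatch per op */
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
        case OP_PRINT: printf("%ld\n", stack[--sp]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4 */
    int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                   OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(code);                                       /* prints 20 */
    return 0;
}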

So, I think that it may be valid to say that compilers have
gotten much better, while interpreters, on average, have not.

Still, those Self papers seem to indicate that a 10x penalty ought
to be the "typical" penalty, and that, with some heroic tricks,
one ought to be able to do much better. [ Maybe I'm misreading
this, but 10x is what they report for Smalltalk and an earlier
version of Self. ]

We have seen numbers all over the place for this example.
Arithmetic *IS* a pathological case for some of these interpreted
languages, since they were designed to handle much more complex
tasks. Asking them to add up a million numbers is an interesting
data point to have, but it is surely at the extreme end of
the sort of tasks we can benchmark. At the other end, a task
that shows off the capabilities of Perl/Python/Lisp/Smalltalk/etc.
will make for quite a non-trivial C benchmark program.
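
For reference, the C end of that pathological case - my own sketch of
the "add up a million numbers" loop, not anyone's exact posted code -
is about as small as a benchmark gets, which is exactly why it flatters
the compiler:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    long i;

    for (i = 1; i <= 1000000L; i++)   /* the whole "benchmark" */
        sum += (double)i;
    printf("%.0f\n", sum);            /* 500000500000 */
    return 0;
}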

We desperately need some better benchmarks!

Does anyone know what the Self folks used?

They mention the "Richards" benchmark, but the reference in
the back says "private communication".

What is in the Gabriel Lisp Benchmarks? Maybe something
there can be adapted to a high-level "language neutral"
benchmark suite.

[ On that closing, comp.benchmarks added to the Newsgroups line!
and self-interest Cc-ed ]

-- Steve Majewski (804-982-0831) <sdm7g@Virginia.EDU> --
-- UVA Department of Molecular Physiology and Biological Physics --
-- Box 449 Health Science Center Charlottesville,VA 22908 --
[ "Cognitive Science is where Philosophy goes when it dies ...
if it hasn't been good!" - Jerry Fodor ]

[ The specific Self papers I'm referring to are:

Chambers, Craig and David Ungar: "Making Pure Object-Oriented
Languages Practical", OOPSLA '91.

Ungar, David, Randall B. Smith, Craig Chambers & Urs Hölzle:
"Object, Message, and Performance: How They Coexist in Self",
IEEE Computer, October 1992.

Both ftp-able from self.stanford.edu. ]