Re: Why are interpreters so slow today

Steven D. Majewski (sdm7g@elvis.med.virginia.edu)
Tue, 19 Apr 1994 12:14:12 -0400

On Apr 19, 9:54, Graham Matthews wrote:
>
> Steven D. Majewski (sdm7g@elvis.med.Virginia.EDU) wrote:
> > [on why interpreters are slow]
> > They make the points:
> > Compilers can aggressively inline code to avoid procedure calls.
> > Object-oriented languages ( and, I would assert, interpreted
> > languages in general ) use function/method calls for low-level
> > operations.
> > High call frequencies interfere with good performance.
>
> Inlining can be done, and is done, in OO languages and interpreted
> languages. (Don't confuse the semantics of OO languages and abstraction
> with the implementations of such languages.) Your conclusion therefore
> does not follow.
>

Sorry. You misunderstood me. ( But maybe I wasn't very clear? )
Yes - I know inlining and other techniques CAN be done - I was
quoting from one of the Self papers where, after stating those
points, they go on to describe a system that manages to get within
a factor of two of the performance of C. But that is a research
system with limited portability. ( Not in theory, I'm sure, but it
hasn't YET been ported to many other machines. ) "Common practice"
is represented by the enormous range of numbers reported in this thread.

I would *like* to see the techniques used in Self and in other systems
( Forth is another semi-compiled system that is rather easy to
inline ) *become* "common practice".

> Steven D. Majewski (sdm7g@elvis.med.Virginia.EDU) wrote:
> > So, I think that it may be valid to say that compilers have
> > gotten much better, while interpreters, on average, have not.
>
> I disagree. Interpreters have improved out of sight, by doing all the
> things that compilers do, to the point of actually compiling some
> or all of the source code. Indeed, they have become so good that it is
> no longer clear where the demarcation between compiler and interpreter
> lies.
>

Note the phrase "on average".
The C compiler market is pretty competitive. I haven't seen any
C benchmarks lately that differ by orders of magnitude. We *have*
just had interpreter benchmarks posted here that vary over a range of
1000x.

[ on "pathological arithmetic" ]
>
> As far as I can see it's only a pathological example for an interpreter
> that doesn't have a sophisticated implementation (for example, one that
> does not support floats very well, has no optimisation passes, does not
> compile to at least byte code, etc.).
>

The point was that even for an interpreter that does those sorts of
optimizations ( Python, for example, compiles to byte-code, has good
float support, and optimizes local variable access ) the example posted
( x = x + 1.0 looped a million times ) is probably a worst-case
comparison, since so many of the primitives that could be compiled
into single machine instructions become function calls instead.
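
To make that worst case concrete, here is a minimal sketch of the
posted benchmark ( in present-day Python syntax; the timing
scaffolding is my own addition ):

    import time

    x = 0.0
    t0 = time.time()
    for i in range(1000000):
        x = x + 1.0    # each "+" dispatches through a generic
                       # function call, not a single ADD instruction
    print("a million float additions:", time.time() - t0, "seconds")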
The comparison will look better when you do tests where the "primitives"
are further away from the granularity of a single machine instruction.
( For example: count the occurrences of each unique word in a
text, sort the words by frequency (primary) and alphabetically
(secondary), and write the output. The primitives here - splitting
strings into words, entering them into a dictionary or incrementing
the count if they are already there, sorting lists - are primitive
operations in Perl/Python/etc., and they are going to look much
better when benchmarked against an equivalent C program than
counting to a million does! That example also emphasizes the
"time-to-develop" advantage of those High Level Languages over a
C implementation. )
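
A minimal sketch of that word-count test ( again in present-day
Python syntax; reading from standard input is my own choice of
detail ):

    import sys

    counts = {}
    for line in sys.stdin:
        for word in line.split():    # splitting a string is one primitive
            counts[word] = counts.get(word, 0) + 1

    # sort by descending frequency, then alphabetically within a frequency
    for word, n in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])):
        print(n, word)

Each of those few lines does work that would take dozens of lines
of C, and each step stays inside the interpreter's C-coded primitives.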

-- Steve Majewski (804-982-0831) <sdm7g@Virginia.EDU> --
-- UVA Department of Molecular Physiology and Biological Physics --
-- Box 449 Health Science Center Charlottesville, VA 22908 --
[ "Cognitive Science is where Philosophy goes when it dies ...
if it hasn't been good!" - Jerry Fodor ]