Re: Why are interpreters so slow today

Alan Knight (knight@mrco.carleton.ca)
Sun, 17 Apr 1994 00:27:38 GMT

In <nagleCoD5o6.CD6@netcom.com> nagle@netcom.com (John Nagle) writes:

>knight@mrco.carleton.ca (Alan Knight) writes:
>>For one thing, this is a fairly pathological benchmark. I can't speak
>>for Python, but none of the current Smalltalk implementations do very
>>much optimization of floating point arithmetic. Smalltalk/V Mac, in
>>particular, has abysmal floating point (you don't say which dialect
>>you tried).

> Arithmetic is a pathological benchmark. Right.

> John Nagle

For floating-point arithmetic in Smalltalk/V Mac, yes. It performs far
worse than any other aspect of that system (not that V/Mac's
performance is great in general, but its floating-point arithmetic is
abysmal). Unless your application consists primarily of tight loops of
floating-point arithmetic, this is a misleading benchmark.
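
(For the record, the sort of loop I mean is roughly the following,
shown in Python only because it was one of the languages benchmarked;
the iteration count and arithmetic are made up here, not taken from
the original benchmark:

    # hypothetical tight floating-point loop of the kind being benchmarked
    x = 0.0
    for i in range(100000):
        x = x + 1.1
    print(x)

An implementation that boxes every intermediate float, as Python and
the Smalltalks do, spends nearly all of its time here on allocation
and dispatch rather than on the arithmetic itself.)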

<In relation to other posts>

It is interesting to note the wide variety of reasons that people can
come up with to explain performance differences. We have posts from
people explaining why fast interpreters no longer exist and others
posting their own one-line benchmarks from fast interpreters.

As a point of information, ALL the commercial Smalltalks are at least
byte-code interpreted. Most commercial implementations also use a
technique called "dynamic compilation", in which the byte-codes of
some methods are compiled all the way to machine code and the results
are kept in a cache. This is not done for all methods, in order to
conserve space. I believe V/OS2 once fully compiled all methods, but
that this has since been changed.
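
(Purely as an illustration of the idea, not of any vendor's actual
scheme, the caching amounts to something like the following sketch,
again in Python; the class name, eviction policy, and cache limit are
all invented:

    # Illustrative sketch: lazily "compile" methods and keep the results
    # in a bounded cache, recompiling after eviction if needed.
    from collections import OrderedDict

    class MethodCache:
        def __init__(self, limit=64):        # limit is an arbitrary stand-in
            self.limit = limit
            self.cache = OrderedDict()       # method name -> compiled code

        def run(self, name, source, env):
            code = self.cache.get(name)
            if code is None:
                # stand-in for translating byte-codes to machine code
                code = compile(source, name, 'exec')
                if len(self.cache) >= self.limit:
                    self.cache.popitem(last=False)   # evict the oldest entry
                self.cache[name] = code
            exec(code, env)

The real systems face the same trade-off: compiled methods run fast,
but keeping machine code for every method costs too much space, so
only a cache of them is retained.)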

-- 
 Alan Knight                | The Object People
 knight@acm.org             | Smalltalk and OO Training and Consulting
 alan_knight@mindlink.bc.ca | 509-885 Meadowlands Dr.
 +1 613 225 8812            | Ottawa, Canada, K2C 3N2