Well, interpreters today are generally designed for something
other than raw computational power (especially floating point). They
can be designed for easier debugging before compiling (e.g., most LISPs
come with a compiler as well), or to be efficient at certain
operations, or just to be more convenient. In fact, it's hard to
say what counts as a compiler and what counts as an interpreter
anymore (many "interpreters" compile to bytecodes, Self compiles
a method the first time it is used, etc).
If computation speed is important, then consider having the
extension be done via compiled code (i.e., recompile the application
or allow dynamic linking). Or provide more primitives for the
extension language to call. Or write your own interpreter. Or
get one of those old interpreters that you thought were faster
than modern ones and adapt it.
I find the "1/10th the speed of C" argument amusing. You may see
that much difference within the same C compiler with and without
optimization turned on (in a quick test, I got a 5x difference).
A Forth version (implemented in C, not assembler) was only
3 times slower than an unoptimized C version.
Then again, maybe interpreters seemed better because C compilers
used to be worse :-)
-- Darin Johnson djohnson@ucsd.edu Ensign, activate the Wesley Crusher!