Re: Internal Objects: method vs function/procedure vs both

Jim Roskind
Fri, 1 Apr 1994 23:14:56 +0800

> Date: Fri, 01 Apr 94 23:32:46 EST
> From: Tim Peters <>
> > I think it is much easier to read either:
> >
> > x.h().g().f().e().d().c()
> >
> > (which *I* think is real easy) or:
> >
> > c(d(e(f(g(h(x))))))
> >
> > (which isn't too bad, but makes my eyes bug a bit at the end) than to
> > read:
> >
> > c(e(g(x.h()).f()).d())
> >
> > which is beyond my visual comprehension.
> Mine too. But you caricature the case by presenting a pathologically
> complicated mixture of styles there -- if "sort(foo.keys())" is beyond
> your limit of easy visual comprehension, you're the only person I've ever
> met who can't count higher than I can <0.6 grin>.

Bzzzzzt. Sorry. Cheap comments about IQ don't win arguments. :-]

> > My point is that having the ability to stick with a single metaphor is
> > very helpful in readability.
> If you're in the (bad!) habit of cramming a dozen invocations (of methods
> and/or functions) on a single line, I would agree.

Bzzzzt. Sorry. Cheap comments about coding style without any basis
in fact don't win arguments. Ya' gotta get back to the issue.

> If you're more
> considerate of future readers and generally stick to at most a few
> invocations on a single line, I don't think mixing metaphors truly is a
> fly in the ointment ...

I listen to readers: past, present and future. Mixing metaphors is
bad. It's that simple.

> > ...
> > 'cause you propose (on would-be equal footing):
> > > for i in sort(foo.keys()):
> > > ...
> You _truly_ believe that's hard to read?

No, it's not "hard" in an absolute sense. It is simply "harder" to
read than necessary, and *much* harder for a beginner to write. The
unending question (for a beginner such as myself) is: Is builtin
blob() a method or a function? Hmmm... well sort() is a method, and
len() is a func, and keys() is a method, and hasattr() is a func,
has_key() is a method, and repr() is a func (unless it's on a user
defined object, and then __repr__ is a method), and ...

Consistency has many paybacks. Reading is one. Writing is another.
Learning is a third.
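The split Jim describes is easy to reproduce even in modern Python (a
sketch using today's names; `sorted` is a builtin that did not exist
when this was written):

```python
d = {"b": 2, "a": 1}

# Functions: applied *to* the object.
assert len(d) == 2              # len() is a function, not d.len()
assert hasattr(d, "keys")       # hasattr() is a function
assert repr(d).startswith("{")  # repr() is a function (backed by the __repr__ method)

# Methods: invoked *on* the object.
ks = list(d.keys())             # keys() is a method, not keys(d)
ks.sort()                       # sort() is a list method, not sort(ks)
assert ks == ["a", "b"]
```

There is still no rule a beginner can apply to predict which spelling a
given operation uses; each name simply has to be memorized.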

> ... high-stakes game ...

Bzzzzzt. Sorry, ...

> > and alternately you don't mind the inefficiency of forcing use of
> > throw-away temps, and the addition of two lines of code in :-(:
> >
> > > keys = foo.keys()
> > > keys.sort()
> > > for i in keys:
> > > ...
> I prefer the former, but no, the latter's fine too. It's quite clear,
> naming "throw-away" intermediate temps is a very effective way to make
> complex expressions more readily understandable, ...

and a way to make a simple statement complex!

> ... and the "inefficiency" is an illusion

Measure before you speak: it is not an illusion. The following two
functions have significantly different execution times with the
current implementation of Python:

def inter():
    a = 1                       # assign to an intermediate value
    b = a

def direc():
    b = 1

... but don't try measuring this at home with the standard
distribution profiler. The profiler that comes with Python is broken
(you are welcome to use this as an excuse for not knowing better, and
it is a good excuse). I'll be distributing one that works shortly.
You can measure the above funcs with a simple test, but you've got to
understand what makes Python run fast or slow so that you don't get
the results buried in the noise. Be sure that you use global
statements to declare the functions so that you expedite the call. Be
careful to properly subtract out the time that is spent in the code
that loops around this call. When you're done, you'll find that the
ratio is about 5:4 for the *total* execution time of these two
functions! If all you want to do is see that there is no illusion to
be had, the following will consistently display the difference in
times for the two functions:

def inter_test(m):
    global inter, direc, ostimes
    delta = 0
    p = 20
    while p > 0:
        p = p - 1
        n = m

        s = ostimes()
        while n > 0:
            n = n - 1
            inter()             # time the version with the temp
        f = ostimes()

        t1 = f[0]+f[1] - s[0]-s[1]
        print "Inter:", t1,
        n = m

        s = ostimes()
        while n > 0:
            n = n - 1
            direc()             # time the direct version
        f = ostimes()

        t2 = f[0]+f[1] - s[0]-s[1]
        print "Direc:", t2, "(", t2-t1, ")"

        delta = delta + t1 - t2
    print "Total difference:", delta
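For anyone repeating the measurement today, the `timeit` module (added
to Python long after this thread) handles the loop-overhead and
calibration issues Jim describes by hand; a minimal modern sketch,
with no claim that the 5:4 ratio still holds on current interpreters:

```python
import timeit

def inter():
    a = 1   # intermediate assignment
    b = a

def direc():
    b = 1

# timeit repeats the call and subtracts its own loop overhead.
t_inter = timeit.timeit(inter, number=1_000_000)
t_direc = timeit.timeit(direc, number=1_000_000)
print("inter: %.3fs  direc: %.3fs" % (t_inter, t_direc))
```

The absolute numbers depend on the interpreter version and machine,
but the extra store and load for the temp are still executed.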

> (in your preferred "foo.keys().sort()", what do you think
> the "foo.keys()" part returns if _not_ a throw-away temp -- albeit an
> anonymous one? ditto the ".sort()" part? ).

Alas, you don't recognize the distinction between a statically
compiled program, wherein an optimizer can monitor lifetimes of
temporaries and optimize them away, and an interpreted language such
as Python (which in many scenarios *cannot* possibly optimize away
temporaries). In the case that I've presented, it is theoretically
possible to optimize away the intermediate assignment. In other more
complex cases, where a competent C compiler *can* do some swift
optimizations, the interpreted nature of Python will preclude such
optimizations.
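This is directly checkable in modern CPython, which still does not
eliminate the intermediate store: the `dis` module shows the extra
STORE_FAST/LOAD_FAST pair (a sketch; the exact opcode list varies by
version, but the temp always costs extra instructions):

```python
import dis

def inter():
    a = 1   # the intermediate assignment survives compilation
    b = a

def direc():
    b = 1

ops_inter = [i.opname for i in dis.get_instructions(inter)]
ops_direc = [i.opname for i in dis.get_instructions(direc)]

assert "STORE_FAST" in ops_inter       # store into the temp
assert len(ops_inter) > len(ops_direc) # the temp costs real bytecode
dis.dis(inter)
```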

The speed-strength of an interpreted language such as Python comes
from using as much high-level code as possible, and relying on
efficient low-level C implementations of builtins to get things done
fast. The trick is to stay away from diddling bits (or moving scalars,
such as temporaries) as much as possible. (Please don't waste your
time telling me about the virtues of development in an interpreted
language: I know them. That is why I'm using Python.)
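The same advice holds in any Python vintage: push the loop into a
C-implemented builtin rather than shuffling scalars yourself. A small
modern illustration (the speedup magnitude is machine-dependent and
not asserted here):

```python
import timeit

data = list(range(10_000))

def by_hand():
    total = 0
    for x in data:          # scalar shuffling inside the interpreter loop
        total = total + x
    return total

def via_builtin():
    return sum(data)        # the loop runs in C

assert by_hand() == via_builtin()
print("loop:   %.4fs" % timeit.timeit(by_hand, number=100))
print("builtin: %.4fs" % timeit.timeit(via_builtin, number=100))
```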

The bad news is that the cleanest workaround for the sort of problem
that led to this topic causes the user (such as you) to fall into the
trap of using throw-away temporaries. If the deficiency we are
talking about is corrected, then there would be no need to write slow
code of this kind.

> > I don't argue for *either* style too heartily,
> Then we're halfway to 100% agreement <smile>.
> > but I do argue for the ability to use one style or the other
> > consistently.
> Ya, I do understand the appeal of that. When Guido can't hear me, I
> sometimes tell people that while other languages merely support multiple
> paradigms, Python forces you to _use_ multiple paradigms <snicker>.

I agree.

> Python is at least "fair" in irritating both functional and OO purists
> about equally ... I'm not sure why that doesn't bother me, but strangely
> enough it doesn't. I think it's because I have such an easy time _using_
> Python to get real work done, so find it hard to get excited about its
> putative deficiencies.

I think this last paragraph shows we have much more agreement than
disagreement; it is just that, perchance, you enjoy the position of
devil's advocate. ;-o

> ... Python
> already _allows_ both [metaphors]
> (which is why I've often posted little functions to
> the list, and Steve's often posted little classes to the list: it's
> usually not hard to _fake_ (simulate) either style more purely -- but
> you often have to build a little (or a lot) of the supporting machinery
> yourself).

It's easy for you to say that it is easy to "fake it," since you
provide the function-like versions. Alas, built-in types are not
full-blown classes, and you *cannot* derive from them. It takes a lot
more work to get the object-like metaphor to fly.
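At the time, the only route was to wrap a builtin and forward every
operation by hand. A minimal sketch of that delegation machinery (the
`DictWrapper` name is hypothetical; modern Python lets you subclass
`dict` directly, which is exactly the deficiency under discussion):

```python
class DictWrapper:
    """Fake 'deriving' from a builtin by wrapping and delegating."""

    def __init__(self, data=None):
        self._data = dict(data or {})

    # Every operation you want must be forwarded by hand...
    def keys(self):
        return list(self._data.keys())

    def __getitem__(self, key):
        return self._data[key]

    def __len__(self):
        return len(self._data)

    # ...before you can finally add the method you actually wanted.
    def sorted_keys(self):
        return sorted(self._data.keys())

d = DictWrapper({"b": 2, "a": 1})
assert d.sorted_keys() == ["a", "b"]
assert len(d) == 2
```

Each forwarded method is boilerplate, and anything you forget to
forward silently breaks the illusion, which is why faking the object
metaphor was so much more work than faking the function metaphor.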

> > This would mean that internal types would have both len() methods AND
> > would work as args to the function len(). ...
> Believe it or not <smile>, I probably would too. I guess that because
> there's at least _one_ builtin way to spell "sort" & "len" & so on, I
> figure I've got 95% of what I want, and the other 5% isn't worth carping
> about.

As I said, the agreement may be greater than you'd let on ;-)

> Tim Peters


Jim Roskind