Re: Overloading function calls

Tim Peters
Wed, 01 Jun 94 05:33:22 -0400


> What we need is the right amount of magic!

Bingo, twice over.

> > [__len__ vs __nonzero__]

> I would like to stick to the current behavior that if __nonzero__ is
> not defined but __len__ is, __len__ is used to determine truth/falsehood.

This must be Good Magic, because I see now that _I've_ <ahem> used it
often, and didn't even consciously notice it.

> It is already the case that __nonzero__ overrides __len__, so if you
> want them to be different you can do that.

Ah! That's exactly what I was suggesting. This distinction didn't make
it into the docs (they describe __nonzero__ as just being an "alternative
name" for __len__). Exploratory Programming shows that one more thing is
true: if x.__nonzero__ is defined but x.__len__ isn't, len(x) yields an
AttributeError. So one vote for leaving len/nonzero exactly as they are!
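
The behavior described above can be checked in modern Python, where __nonzero__ was later renamed __bool__ (and the missing-__len__ failure became a TypeError rather than an AttributeError). The class names here are illustrative, not from this thread:

```python
class Stack:
    """Defines only __len__; truth value falls back to len() != 0."""
    def __init__(self, items=()):
        self.items = list(items)
    def __len__(self):
        return len(self.items)

class Always:
    """Defines only __bool__ (nee __nonzero__); len() still fails."""
    def __bool__(self):
        return True

print(bool(Stack()))      # False: empty, determined via __len__
print(bool(Stack([1])))   # True
print(bool(Always()))     # True, via __bool__, no __len__ consulted
try:
    len(Always())         # fails: __bool__ does not stand in for __len__
except TypeError as e:
    print("len failed:", e)
```

So the asymmetry survives to this day: truth testing falls back to length, but length never falls back to truth.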

> > [__cmp__ vs individual relationals]

> More precisely: if __eq__ exists, use it, else if __cmp__ exists, use
> it, else raise an exception; likewise for the other operators.

Fully agreed, except for the "else raise an exception" part. I'm
skeptical that you intended to say that, though: Python currently
compares any two objects (even instances of classes that don't define
__cmp__), and I hope it continues to do so by default.

I.e., if none of __eq__/__lt__/etc are defined, and __cmp__ isn't
defined, then a relational operator maps to the default general object
comparison. Do you really want to change that?
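
The dispatch order under discussion can be sketched as a plain function; __my_eq__ is a hypothetical name (chosen to avoid colliding with the real __eq__ every modern object inherits), __cmp__ is the old three-way method, and the final fallback here is object identity standing in for "the default general object comparison":

```python
def equals(a, b):
    """Sketch of the proposed '==' dispatch: per-operator method first,
    then __cmp__, then the default comparison (no exception raised)."""
    eq = getattr(type(a), "__my_eq__", None)   # hypothetical spelling
    if eq is not None:
        return eq(a, b)
    cmp_ = getattr(type(a), "__cmp__", None)   # old three-way compare
    if cmp_ is not None:
        return cmp_(a, b) == 0
    return a is b                              # default: identity fallback

class Account:
    """Class with only a three-way __cmp__ (an ordinary method here;
    real __cmp__ dispatch existed only in early Python)."""
    def __init__(self, v):
        self.v = v
    def __cmp__(self, other):
        return (self.v > other.v) - (self.v < other.v)

print(equals(Account(3), Account(3)))   # True, via __cmp__ == 0
x = object()
print(equals(x, x))                     # True, default fallback
print(equals(object(), object()))       # False
```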

> There's something to say for not using __ne__ to override !=. I think
> this should always be defined as the inverse of __eq__ (or at least
> default to that).

How do we solve this? You're making a semantic decision for the user
here, and the "problem" is it's the same one I'd make. I really don't
want people to be _able_ to write code where "a == b" is not the negation
of "a != b", and I'd even go so far as to suggest not implementing __ne__
at all, as a way to partially enforce it ("partially" == can't stop
__cmp__ from lying).

So three choices, in order of decreasing paternalism/fairy-godperson-ism:

1) __ne__ doesn't exist; "!=" maps to "not __eq__", else to __cmp__
!= 0, else ???. [tim, in benign dictator mode]

2) "!=" maps to "__ne__", else to "not __eq__", else to __cmp__ != 0,
else ???. [guido, in cut-off-any-possible-complaint mode]

3) "!=" maps to "__ne__", else to __cmp__ != 0, else ???. [nobody
yet, in rigid consistency mode]
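
For what it's worth, later Pythons settled roughly on option 2: __ne__ exists and can be overridden, but by default it is derived as the negation of __eq__, so a class that defines only __eq__ gets a consistent "!=" for free:

```python
class Point:
    """Defines only __eq__; '!=' falls out as its negation by default."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

p, q = Point(1, 2), Point(1, 2)
print(p == q)   # True, via __eq__
print(p != q)   # False: default __ne__ negates __eq__'s answer
```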


> I'd also propose having an explicit __in__ method that might define a
> quicker implementation of the 'in' operator ... __not_in__ is likewise
> unnecessary since it can call __in__ and reverse the result.

So three similar choices:

1) __not_in__ doesn't exist; 'not in' maps to "not __in__", else to ???

2) 'not in' maps to __not_in__, else to "not __in__", else to ???

3) 'not in' maps to __not_in__, else to ???

"???" is a bit of a different question, in this case, because given the
possibility of __eq__ methods, the default "in" logic may want to use the
__eq__ method if the class defines __eq__. One danger of "magic" is that
it propagates in not-entirely-obvious ways, eh?
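
That propagation is exactly what modern Python does: when a container defines no membership hook, 'in' falls back to iterating and comparing with ==, so an element class's __eq__ silently changes the meaning of membership. A small (contrived) demonstration:

```python
class CaseBlind:
    """Element type whose __eq__ ignores case."""
    def __init__(self, s):
        self.s = s
    def __eq__(self, other):
        return self.s.lower() == str(other).lower()

# The plain list defines no custom membership hook, so 'in' iterates
# and compares with ==, which dispatches to CaseBlind.__eq__:
print(CaseBlind("FOO") in ["bar", "foo"])   # True
```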

One other: in

if obj1 in obj2:

to which object is __in__ directed <0.9 grin>? obj2.__in__ seems right
to me, since it's presumably the "structured" object; perhaps unfortunate
that __in__ then sees its arguments in right-to-left order, but no big
deal.
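
That is in fact how Python later resolved it: the hook (spelled __contains__ rather than __in__) lives on the container, obj2, and receives the candidate element as its argument, so the arguments do arrive "right to left" relative to the expression:

```python
class Range2:
    """Toy container defining the modern membership hook, __contains__,
    on the container (obj2), as suggested above."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __contains__(self, item):   # item is obj1; the container is self
        return self.lo <= item < self.hi

r = Range2(0, 10)
print(3 in r)        # True  -> dispatches to r.__contains__(3)
print(13 not in r)   # True: 'not in' is just the negation; no __not_in__
```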

> > ... if __coerce__ exists, Python should be satisified if it returns
> > any 2-tuple whatsoever whose 1st component is a class instance.

> OK. Then should it still be called if the two instances are already
> of the same type and class? (I'd say yes.)

Interesting question! I say yes too, cuz it's predictable, and it isn't
obviously a help not to do it. And a clear possible use is for a
complicated numeric type's __coerce__ to note when its arguments are
really degenerate cases, and "down cast" them to a simpler type (thus
avoiding the expense of the complicated type's presumably-elaborate
general-case operation).
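
The "down cast" idea can be sketched as follows. Since __coerce__ itself no longer exists in modern Python, this uses a free function with an illustrative name (coerce_pair is not a real API), but the shape is the same: notice a degenerate case and return the simpler type:

```python
class MyComplex:
    """Stand-in for a 'complicated numeric type'."""
    def __init__(self, re, im):
        self.re, self.im = re, im

def coerce_pair(a, b):
    """Hypothetical coerce step: return a pair of compatible operands,
    down-casting a MyComplex with zero imaginary part to a plain float
    so the cheaper real-arithmetic path can be taken."""
    def simplify(x):
        if isinstance(x, MyComplex) and x.im == 0:
            return x.re          # degenerate case: really just a real
        return x
    return simplify(a), simplify(b)

a, b = coerce_pair(MyComplex(3.0, 0.0), 2)
print(a, b)   # 3.0 2 -- the complex wrapper has been stripped away
```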

> > + One simplifying option I haven't thought enough about: ...
> > [tedious description deleted, of a scheme that invoked __coerce__
> > implicitly only for magic operations]

> If I understand you correctly this would break things like
> 3*some_complex_number (where currently the LHS is coerced to complex
> before __mul__ is called).

Let me withdraw that; I thought enough about it since <wink>.

BTW, in Python today it's not the case that 3*some_complex_number coerces
the LHS, right? Today the __coerce__ method is ignored for __mul__, and
some_complex_number gets invoked with the LHS & RHS objects swapped but
uncoerced.

So I broke "3*some_complex_number" anyway, by taking away the __mul__
magic, but hoped to get it back via some general scheme for letting an
arbitrary binary operator's RHS object grab control.

If Python stops checking the type of coerce's result's 2nd component,
then e.g. in "3*matrix", matrix.__coerce__(3) can return (matrix,3)
without losing the valuable info that "3" is a plain integer, so that's
fine so far as it goes.

But (a) I'm not sure it's a good idea for a "RHS object grabs control"
scheme to _need_ a __coerce__ method to work; and (b) for non-commutative
operations the swapping isn't harmless.

So this is pretty complicated stuff! The current scheme doesn't work out
well in practice, & I'm trying both to improve that _and_ to extend its
power to other binary operators. A scheme that adds a new spelling for
each binary-operator special method (meaning "this is the version invoked
when the object was on the RHS, and the arguments have been swapped") is
unattractive on the face of it. But a scheme that passes a magical
"arguments swapped?" flag to a single spelling of the operator method is
incompatible with all current binary-operator method code.
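
Unattractive or not, the "new spelling per operator" scheme is essentially what Python eventually adopted: a reflected method (__rmul__ and friends) that is tried when the left operand declines the operation, with the arguments understood to be swapped. A minimal sketch of how that plays out for "3*matrix":

```python
class Matrix:
    """Toy scalar multiplication using the reflected-method spelling
    Python later grew for exactly this 'RHS grabs control' case."""
    def __init__(self, cells):
        self.cells = cells
    def __mul__(self, k):        # Matrix * scalar
        return Matrix([[k * c for c in row] for row in self.cells])
    def __rmul__(self, k):       # scalar * Matrix: the "swapped" spelling
        return self.__mul__(k)

# int has no idea what a Matrix is, so 3.__mul__ declines and
# Matrix.__rmul__ is invoked with the operands swapped:
m = 3 * Matrix([[1, 2], [3, 4]])
print(m.cells)   # [[3, 6], [9, 12]]
```

Note that no __coerce__ is needed, and non-commutative operations stay correct because __rmul__ knows its arguments arrived swapped.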

So I haven't yet thought of something reasonable enough to suggest
(despite that I thought I did last time I wrote <wink/sigh>).


sure-ain't-trivial!-ly y'rs - tim

Tim Peters
not speaking for Kendall Square Research Corp