    if debug > 0:
        do a dirt-cheap sanity check
    if debug > 9:
        do a pretty cheap test; e.g., a simple function pre- or post-assertion
    if debug > 99:
        do a possibly-expensive test (e.g., the argument really is a list of
        (string,float) 2-tuples).
    if debug > 999:
        print interesting trace output
    if debug > 9999:
        do insanely expensive consistency checks, perhaps crawling over
        every data structure in the application
So the higher "debug" is set, the more help (and the greater the
slowdown!) you get when debugging; & when debug==0, you get no help at
all (but still pay something for the "debug > N" tests anyway).
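For concreteness, here's roughly how that reads in real code (a sketch in
modern Python spelling; the function and its (string, float) data are
invented for illustration):

    debug = 0   # raise this to buy more checking (and more slowdown)

    def process(pairs):
        if debug > 0:
            assert pairs is not None        # dirt-cheap sanity check
        if debug > 99:
            # possibly-expensive test: the argument really is a list of
            # (string, float) 2-tuples
            assert all(isinstance(s, str) and isinstance(f, float)
                       for s, f in pairs)
        if debug > 999:
            print("process: got %d pairs" % len(pairs))   # trace output
        return sum(f for _, f in pairs)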
In Python this naturally yields a scheme where __builtin__ supplies a
universally-visible value for "debug", which can be overridden
independently on a module-by-module basis at will (Python looks up
"debug" first in the module namespace, and falls back to __builtin__ only
if it's not found there).
Doing that much yields a pretty pleasant scheme, and doesn't require any
changes to Python. Then again, I tend to define only one class per
module, so "class-level" and "module-level" debugging generally mean the
same thing to me.
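A rough sketch of that arrangement (using the modern "builtins" spelling
of __builtin__; the function and numbers here are invented):

    import builtins
    builtins.debug = 0    # universally-visible default; e.g., set once at
                          # startup from a command-line option

    # --- in some module that wants extra checking ---
    debug = 100           # module-level name shadows the builtin one

    def frobnicate(x):
        if debug > 99:    # finds the module-level debug, not the builtin
            assert x >= 0, "frobnicate: negative argument"
        return x * 2

Modules that never assign their own "debug" simply see the builtin value.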
With some direct language support, the scheme gets better. E.g., suppose
there were a new "'debug' int_literal:" block, as in
    debug 999:
        print interesting trace output
compiling to (in effect)
    if __debug__ > 999:
        print interesting trace output
and __builtin__.__debug__ were settable via command-line option,
defaulting to 0.
That's a little more convenient, but the real point is that by stuffing
the debug code under a new _statement_, Python (or a _simple_
preprocessor) could easily detect debug code when it saw it, and a
compile-time switch could be defined that means "ignore all debug blocks
whose int_literal is >= NNN". I.e., don't even compile code for them (by
default NNN would be "huge", so that no debug code would be tossed, while
specifying 0 would toss all debug code).
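To make the idea concrete, here's a toy version of such a preprocessor (a
sketch only; it assumes lines come with their trailing newlines, as from
readlines(), and doesn't try to handle continuation lines or other dark
corners): it tosses every "debug NNN:" block with NNN >= the threshold and
rewrites the survivors as "if __debug__ > NNN:" blocks.

    import re

    DEBUG_RE = re.compile(r"^(\s*)debug\s+(\d+)\s*:\s*$")

    def strip_debug(lines, threshold):
        out, skip_indent = [], None
        for line in lines:
            if skip_indent is not None:
                if line.strip() and len(line) - len(line.lstrip()) <= skip_indent:
                    skip_indent = None      # dedented: the tossed block is over
                else:
                    continue                # still inside it; drop the line
            m = DEBUG_RE.match(line)
            if m:
                indent, level = m.group(1), int(m.group(2))
                if level >= threshold:
                    skip_indent = len(indent)   # toss this whole block
                    continue
                line = "%sif __debug__ > %d:\n" % (indent, level)
            out.append(line)
        return out

E.g., with a threshold of 1000, a "debug 9999:" block disappears entirely
while a "debug 999:" block survives as "if __debug__ > 999:".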
A variant is along the lines of
    if debug & 0x1:
        do a dirt-cheap sanity check
    if debug & 0x80:
        do an insanely expensive test
    if debug & 0x40:
        print trace output
etc. I.e., it provides finer control by allowing any subset of "debug
flags" to be set. OTOH, assignments of meanings to flags are much better
done via symbolic names for the flag bits, and then there's little Python
could do to weed debug blocks out at compile-time unless more mechanism
were introduced to teach the compiler the flags' names and values. The
user's code needs to get at those names too, so perhaps they could be
defined in a conventionally-named module that got special-cased by the
compiler.
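Roughly, such a scheme might look like this (a sketch; the flag names,
their values, and the function are all invented, which is exactly the
"no two people would agree" problem below):

    # hypothetical conventionally-named flags module, faked inline here
    SANITY    = 0x1     # dirt-cheap sanity checks
    TRACE     = 0x40    # trace output
    EXPENSIVE = 0x80    # insanely expensive consistency checks

    debug = SANITY | TRACE          # any subset of flags may be set

    def transmogrify(seq):
        if debug & SANITY:
            assert seq is not None
        if debug & TRACE:
            print("transmogrify: %d items" % len(seq))
        if debug & EXPENSIVE:
            assert list(seq) == sorted(seq)   # pretend this crawls everything
        return list(reversed(seq))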
On the third hand, no two people would agree on the _contents_ of that
module, so it would all turn into a mess. The int_literal version at
least has the advantage that "bigger means more burdensome", enforcing
_some_ common view of the world.
I can live without this, but think the "int literal" version has an
attractively high benefit/cost ratio. If it were already implemented,
I'd be happy to stick argument checks in debug blocks.
One thought on argument-checking in general: If a passed argument really
is the "wrong type", Python will usually raise _some_ sort of exception
all by itself, when the argument is used in a senseless (for it)
operation. So I see the primary point of explicit argument-checking as
being a means to provide a user with a clearer error msg. But someone
who knows Python probably doesn't need the help, while someone who
doesn't probably won't be helped by seeing "arg #4 type mismatch in
function shear_the_sheep: hasattr(mindy,'bleat') is false".
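A sketch of the contrast (shear_the_sheep is the invented example above):

    def shear_the_sheep(sheep):
        # explicit check: a clearer message, but mostly to readers who would
        # have understood the natural exception anyway
        if not hasattr(sheep, 'bleat'):
            raise TypeError("shear_the_sheep: argument can't bleat")
        sheep.bleat()       # without the check, a wrong argument raises
        return "wool"       # AttributeError here all by itself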
In other words, internal errors make no sense to end users no matter
what, naive users need very carefully constructed error messages if a
problem is their fault (i.e., not an "internal error" in the
application), and gonzo users will figure out what went wrong no matter
how obscure the msg (do I hear a vote for "silo overflow" <0.9 grin>?).
unto-the-root-this-day-is-born-a-brother-ly y'rs - tim
Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp