Well, I don't think this strategy is such a bad idea. I haven't
bothered to look into the debugger myself, for lack of time (and
also because I think good debuggers sometimes encourage me to write
bad code).
Some tricks:
Modularize your code heavily and (during development)
add an acceptance test for each component at the bottom of
the file. This is great for debugging/regression testing,
and I also find it useful as a "mini-reference" before I'm
finished documenting everything.
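A minimal sketch of the first trick (the module and function here are
invented for illustration): the acceptance test sits at the bottom of
the very file it exercises, guarded so it only runs when the file is
executed directly.

```python
# rle.py -- hypothetical module; the acceptance test lives at the
# bottom of the same file, doubling as a mini-reference for usage.

def encode(s):
    """Run-length encode a string: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

if __name__ == "__main__":
    # Acceptance test: runs only when the file is executed directly,
    # so it costs importers nothing but catches regressions early.
    assert encode("") == []
    assert encode("aaab") == [("a", 3), ("b", 1)]
    print("rle: all tests passed")
```

Running `python rle.py` executes the tests; importing the module does not.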
For complex computations, make the _computation_ itself into an
_object_, encapsulating all the state information, and make each
step in the computation a method. For example, in a parser
generator I defined a ParseObject with the major steps of parsing
- initialization
- one reduction step
- goto state computation
all as methods, and the string, stack, etc., all as
members. This way I could "run" a parse by hand and look
at each intermediate result to see if it was working properly.
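The original ParseObject isn't shown in the post, so here is a toy
stand-in (all names invented) demonstrating the same idea: the
computation's state lives in members, each step is a method, and you
can single-step it by hand and inspect the intermediate results.

```python
class RPNComputation:
    """Evaluates a reverse-Polish token list one step at a time,
    so every intermediate state can be inspected during debugging."""

    def __init__(self, tokens):
        self.tokens = list(tokens)   # remaining input (a member, not a local)
        self.stack = []              # evaluation stack, inspectable between steps

    def step(self):
        """Consume one token; return False when the input is exhausted."""
        if not self.tokens:
            return False
        tok = self.tokens.pop(0)
        if tok in ("+", "-", "*"):
            b, a = self.stack.pop(), self.stack.pop()
            self.stack.append({"+": a + b, "-": a - b, "*": a * b}[tok])
        else:
            self.stack.append(float(tok))
        return True

    def result(self):
        return self.stack[-1]

comp = RPNComputation(["2", "3", "+", "4", "*"])
while comp.step():
    print(comp.tokens, comp.stack)   # watch the state evolve step by step
print(comp.result())                 # 20.0
```

The same loop could run to completion silently in production; the
per-step interface exists purely so a broken run can be replayed slowly.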
Of course, both tricks add overhead, but slow code is better than
broken code, and once the thing works you can "demodularize" it
(pretty easily, thanks to Python's very clean syntax).
Just a thought -a.
====
Great lyrics #34:
Maybe someone kicked you around some
Who knows? Maybe you were tied up taken away and held for ran-some
-Tom Petty.