> [guido explains the history of his nefarious abuse]
> ...
> Amoeba only had (has?) mutexes and we got quite clever in abusing them
> for other synchronization constructs like signal/wait and (counting)
> semaphores.
>
> I guess it's time to get modern and add condition variables to
> thread.c and threadmodule.c so we don't HAVE to write code like that
> -- even if it's KSR's fault and not POSIX', I still agree that it's
> bad writing style (even though it's lean and mean -- it doesn't
> express what you want to happen).
I actually like the current Python lock semantics: it's fine by me if you
want to try to acquire a lock you already hold! I personally think it's
an elegant (ya, "lean and mean") approach to many problems.
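For instance, the signal/wait abuse falls right out of those semantics:
since any thread is allowed to release a Python lock (not just the thread
that acquired it), a second acquire doubles as "wait until somebody
signals me". A minimal sketch, in terms of the modern threading module
(the names here are made up for illustration):

    import threading

    result_ready = threading.Lock()
    result_ready.acquire()          # start out "unsignaled"
    results = []

    def worker():
        results.append(42)          # produce something, then signal:
        result_ready.release()      # legal even though the main thread
                                    # did the matching acquire()

    threading.Thread(target=worker).start()
    result_ready.acquire()          # blocks until worker's release()
    print(results[0])               # safe to look now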
KSR hardware actually has the Python lock semantics built in: a Python
lock acquire _could_ map to a single
    gsp.wt %register    # get atomic state on subpage whose address
                        # is in %register; ".wt" = wait for it
instruction on this machine, and a Python lock release a single
    rsp %register       # release atomic state on subpage at %register
instruction. That's why I'm pretty sure POSIX demanded that
pthread_mutex not support it: it had to be extra work for us to detect, &
gripe about, trying to acquire a lock you already have (our HW doesn't
mind a bit).
Whatever,
1) The 2nd thread.c patch is a constructive proof that std POSIX gimmicks
(misimplemented by us or not) _do_ suffice to implement Python locks
efficiently; it's just a little tricky to accomplish (there's a sketch of
the shape of it at the end of this msg).
2) You're certainly right that using Python mutexes for _everything_
can be unclear. It occurred to me today that, in the "release"
implementation, I used pthread_cond_signal almost subconsciously,
because that's the behavior most appropriate for the way locks are
used in the Generator class (what I was _really_ interested in <wink>);
but if a mutex were actually being used to control, say, a barrier
exit, pthread_cond_broadcast would have been more appropriate (see the
barrier sketch after point 3). So the "say what you mean" bit affects
the implementation, too.
3) If you want to implement conditions, I recommend the POSIX model
highly, where an additional mutex is part of the protocol (&
_required_ on a "wait" call, along with the condition). It goes like:
A) The thread that's waiting for some arbitrarily-complex condition
(ACC) to become true does:
       mutex_vrbl.acquire()
       while not (code to evaluate the ACC):
           condition_vrbl.wait(mutex_vrbl)
           # That blocks the thread, *and* releases mutex_vrbl. When
           # a condition_vrbl.signal() happens, it will wake up
           # some thread that did a .wait, *and* acquire the mutex
           # again before .wait returns.
           # Because mutex is locked, the state used in evaluating
           # the ACC is frozen at this point, so it's safe to go
           # back & reevaluate the ACC.
       # At this point, ACC is true, and mutex is locked by the thread.
       # So code here can safely muck with the shared state that
       # went into evaluating the ACC -- if it wants to.
       # When done mucking with the shared state, do
       mutex_vrbl.release()
B) Threads that are mucking with shared state that may affect the
ACC do:
       mutex_vrbl.acquire()
       # muck with shared state
       mutex_vrbl.release()
       if it's possible that ACC is true now:
           condition_vrbl.signal() # or .broadcast()
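To make point 2's broadcast case concrete, here's a one-shot barrier
written as an instance of this very protocol, where the ACC is "all n
threads have arrived". It's a sketch using the modern threading module's
Condition, which bundles the mutex and the condition variable into one
object (.wait/.notify_all playing the roles of pthread_cond_wait/
pthread_cond_broadcast); the class and names are made up for illustration:

    import threading

    class Barrier:
        def __init__(self, n):
            self.n = n
            self.arrived = 0
            self.cond = threading.Condition()   # mutex + condition in one

        def wait(self):
            with self.cond:                     # mutex_vrbl.acquire()
                self.arrived += 1
                if self.arrived == self.n:
                    # Every waiter can proceed now, so .signal() would
                    # strand all but one of them: broadcast is what we mean.
                    self.cond.notify_all()
                while self.arrived < self.n:    # while not ACC:
                    self.cond.wait()            #   wait releases the mutex
            # leaving the with-block == mutex_vrbl.release()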
This protocol is powerful & easy to use (if not easy to remember after
a long break from it <wink>); programming with conditions can be very
tricky otherwise. Having the .wait method release (on entry) &
acquire (on exit) the mutex as part of its atomic operation is key.
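And tying it back to point 1: Python's lock semantics drop out of the
same protocol with the ACC being just "the lock is free". The real patch
is C against thread.c; this is only the shape of it, again in modern
threading-module spelling:

    import threading

    class PythonLock:
        def __init__(self):
            self.cond = threading.Condition()  # mutex + condition in one
            self.locked = False

        def acquire(self):
            with self.cond:
                while self.locked:             # blocks even if *this*
                    self.cond.wait()           # thread is the one holding it
                self.locked = True

        def release(self):
            with self.cond:
                self.locked = False            # any thread may do this
                self.cond.notify()             # .signal() is right here:
                                               # only one waiter can win the
                                               # lock anyway (cf. point 2)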
of-making-many-books-there-is-no-end-and-much-study-is-a-weariness-
of-the-flesh-ly y'rs - tim
Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp