After getting the http/url libraries up and running on NT, I wanted to use
exception handling to catch socket timeouts and the like. So, in the following
code (that Guido posted for me), how would I catch, say, a socket timeout (which
I believe the http/url libs are throwing at some point)?
I believe the try would go around the 'f = urllib.urlopen(url)' line, but I am
not sure what the 'except' clause would look like...
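For what it's worth, here is a sketch of the try/except shape in a modern Python
(3.x), where urllib has moved to urllib.request, urlopen takes a timeout
argument, and the socket errors (socket.timeout, URLError) are all subclasses of
OSError, so a single except clause covers them. In the 1.x-era library you would
instead catch IOError and socket.error around urllib.urlopen. The fetch() helper
name and the fallback URL below are mine, just for illustration:

```python
import socket
import urllib.request   # in the old library this was plain "import urllib"

def fetch(url, timeout=5):
    # Return the page data, or None if the network layer raised an error.
    try:
        f = urllib.request.urlopen(url, timeout=timeout)
        try:
            return f.read()
        finally:
            f.close()
    except OSError as e:   # covers socket.timeout, URLError, refused connections
        print("fetch failed:", e)
        return None
```

The point is simply that one except clause around the urlopen/read calls is
enough; you don't need to guard each read separately.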
-----------------
#! /usr/bin/python
# Copy a hard-coded url to the file named by sys.argv[1]
# (default sys.stdout).
import urllib
import sys

def main():
    # Handle arguments
    url = "http://gnn.com/gnn/news/comix/graphics/dilbert.html"
    print url
    if sys.argv[1:]:
        file = sys.argv[1]
    else:
        file = None
    # Open connection and file
    f = urllib.urlopen(url)
    if file:
        g = open(file, 'wb')
    else:
        g = sys.stdout
    # Copy data
    while 1:
        data = f.read(1024)
        if not data: break
        g.write(data)
    # Close file and connection
    if g != sys.stdout: g.close()
    f.close()

main()
--
| "I don't like being bluffed -- makes me doubt    | rjf@aurora.pcg.com    |
| my perception of reality..."                     | 71722,3175            |
| Chris in the morning on KBHR                     |                       |
+---------------------------------------------------+-----------------------+