Well, I don't know what the best strategy is for running a Python
program once a day -- probably cron/crontab, which differs from one
UNIX system to the next, so I can't tell you much about it.
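A minimal crontab entry might look something like this (an untested
sketch -- the interpreter path, script path, URL, and output file are
all hypothetical; you'd install it with "crontab -e"):

    # run geturl.py every day at 06:00
    0 6 * * * /usr/local/bin/python /home/you/geturl.py http://www.example.com/ /home/you/page.html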
Python code to fetch a URL and save it to a file is easy, though: the
standard library module "urllib" does almost everything for you. For
instance:
#! /usr/local/bin/python
# Copy the URL given as sys.argv[1] to the file given as sys.argv[2]
# (default: sys.stdout).

import urllib
import sys

def main():
    # Handle arguments; exit if no URL was given
    if not sys.argv[1:]:
        print "usage: geturl url [outputfile]"
        sys.exit(2)
    url = sys.argv[1]
    if sys.argv[2:]:
        filename = sys.argv[2]
    else:
        filename = None
    # Open the connection and the output file
    f = urllib.urlopen(url)
    if filename:
        g = open(filename, 'wb')  # binary mode, in case the data isn't text
    else:
        g = sys.stdout
    # Copy the data in 1 KB chunks
    while 1:
        data = f.read(1024)
        if not data:
            break
        g.write(data)
    # Close the file and the connection
    if g is not sys.stdout:
        g.close()
    f.close()

main()
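To try it by hand, save the script as, say, geturl.py and run it like
this (the URL and output filename are just examples):

    python geturl.py http://www.python.org/ python-home.html

If you omit the output file, the data is written to standard output
instead.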
--Guido van Rossum, CWI, Amsterdam <mailto:Guido.van.Rossum@cwi.nl>
<http://www.cwi.nl/cwi/people/Guido.van.Rossum.html>