PEP: 282
Title: A Logging System
Version: 2e756bfa690f
Last-Modified:  2006-03-31 10:38:30 +0000 (Fri, 31 Mar 2006)
Author: vinay_sajip at red-dove.com (Vinay Sajip), Trent Mick <trentm at activestate.com>
Status: Final
Type: Standards Track
Created: 4-Feb-2002
Python-Version: 2.3
Post-History: 

Abstract

    This PEP describes a proposed logging package for Python's
    standard library.

    Basically, the system involves the user creating one or more
    logger objects on which methods are called to log debugging notes,
    general information, warnings, errors, etc.  Different logging
    'levels' can be used to distinguish important messages from less
    important ones.

    A registry of named singleton logger objects is maintained so that

        1) different logical logging streams (or 'channels') exist
           (say, one for 'zope.zodb' stuff and another for
           'mywebsite'-specific stuff)

        2) one does not have to pass logger object references around.

    The system is configurable at runtime.  This configuration
    mechanism allows one to tune the level and type of logging done
    while not touching the application itself.


Motivation

    If a single logging mechanism is enshrined in the standard
    library, 1) logging is more likely to be done 'well', and 2)
    multiple libraries will be able to be integrated into larger
    applications which can be logged reasonably coherently.


Influences

    This proposal was put together after having studied the
    following logging packages:

        o java.util.logging in JDK 1.4 (a.k.a. JSR047) [1]
        o log4j [2]
        o the Syslog package from the Protomatter project [3]
        o MAL's mx.Log package [4]


Simple Example

    This shows a very simple example of how the logging package can be
    used to generate simple logging output on stderr.

        --------- mymodule.py -------------------------------
        import logging
        log = logging.getLogger("MyModule")

        def doIt():
                log.debug("Doin' stuff...")
                #do stuff...
                raise TypeError, "Bogus type error for testing"
        -----------------------------------------------------

        --------- myapp.py ----------------------------------
        import mymodule, logging

        logging.basicConfig()

        log = logging.getLogger("MyApp")

        log.info("Starting my app")
        try:
                mymodule.doIt()
        except Exception, e:
                log.exception("There was a problem.")
        log.info("Ending my app")
        -----------------------------------------------------

        % python myapp.py

        INFO:MyApp: Starting my app
        DEBUG:MyModule: Doin' stuff...
        ERROR:MyApp: There was a problem.
        Traceback (most recent call last):
                File "myapp.py", line 9, in ?
                        mymodule.doIt()
                File "mymodule.py", line 7, in doIt
                        raise TypeError, "Bogus type error for testing"
        TypeError: Bogus type error for testing

        INFO:MyApp: Ending my app

    The above example shows the default output format.  All aspects
    of the output format should be configurable, so that you could
    have output formatted like this:

        2002-04-19 07:56:58,174 MyModule   DEBUG - Doin' stuff...

    or just

        Doin' stuff...


Control Flow

    Applications make logging calls on *Logger* objects.  Loggers are
    organized in a hierarchical namespace and child Loggers inherit
    some logging properties from their parents in the namespace.

    Logger names fit into a "dotted name" namespace, with dots
    (periods) indicating sub-namespaces.  The namespace of logger
    objects therefore corresponds to a single tree data structure.

       "" is the root of the namespace
       "Zope" would be a child node of the root
       "Zope.ZODB" would be a child node of "Zope"

    These Logger objects create *LogRecord* objects which are passed
    to *Handler* objects for output.  Both Loggers and Handlers may
    use logging *levels* and (optionally) *Filters* to decide if they
    are interested in a particular LogRecord.  When it is necessary to
    output a LogRecord externally, a Handler can (optionally) use a
    *Formatter* to localize and format the message before sending it
    to an I/O stream.

    Each Logger keeps track of a set of output Handlers.  By default
    all Loggers also send their output to all Handlers of their
    ancestor Loggers.  Loggers may, however, also be configured to
    ignore Handlers higher up the tree.

    The APIs are structured so that calls on the Logger APIs can be
    cheap when logging is disabled.  If logging is disabled for a
    given log level, then the Logger can make a cheap comparison test
    and return.  If logging is enabled for a given log level, the
    Logger is still careful to minimize costs before passing the
    LogRecord into the Handlers.  In particular, localization and
    formatting (which are relatively expensive) are deferred until the
    Handler requests them.

    The overall Logger hierarchy can also have a level associated with
    it, which takes precedence over the levels of individual Loggers.
    This is done through a module-level function:

        def disable(lvl):
            """
            Do not generate any LogRecords for requests with a severity less
            than 'lvl'.
            """
            ...
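
    As a sketch of this control flow (using the reference
    implementation's API, and assuming handlers are attached to a
    logger via an addHandler() method), a handler on the root logger
    receives records propagated from a descendant logger, and the
    module-level disable() takes precedence over individual levels:

        import logging, sys

        root = logging.getLogger("")               # the root of the namespace
        root.setLevel(logging.DEBUG)
        root.addHandler(logging.StreamHandler(sys.stderr))

        zodbLog = logging.getLogger("Zope.ZODB")   # child of "Zope", under the root
        zodbLog.debug("cache miss")                # propagates to the root's handler

        logging.disable(logging.INFO)              # hierarchy-wide threshold
        zodbLog.debug("now suppressed")            # discarded before reaching handlers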


Levels

    The logging levels, in increasing order of importance, are:

        DEBUG
        INFO
        WARN
        ERROR
        CRITICAL

    The term CRITICAL is used in preference to FATAL, which is used by
    log4j.  The levels are conceptually the same - that of a serious,
    or very serious, error.  However, FATAL implies death, which in
    Python implies a raised and uncaught exception, traceback, and
    exit.  Since the logging module does not enforce such an outcome
    from a FATAL-level log entry, it makes sense to use CRITICAL in
    preference to FATAL.

    These are just integer constants, to allow simple comparison of
    importance.  Experience has shown that too many levels can be
    confusing, as they lead to subjective interpretation of which
    level should be applied to any particular log request.

    Although the above levels are strongly recommended, the logging
    system should not be prescriptive.  Users may define their own
    levels, as well as the textual representation of any levels.  User
    defined levels must, however, obey the constraints that they are
    all positive integers and that they increase in order of
    increasing severity.

    User-defined logging levels are supported through two module-level
    functions:

        def getLevelName(lvl):
                """Return the text for level 'lvl'."""
                ...

        def addLevelName(lvl, lvlName):
                """
                Add the level 'lvl' with associated text 'levelName', or
                set the textual representation of existing level 'lvl' to be
                'lvlName'."""
                ...
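
    For illustration, a hypothetical VERBOSE level might be registered
    and used as follows (the numeric value is an assumption, chosen to
    sit between the standard DEBUG and INFO values):

        import logging

        VERBOSE = (logging.DEBUG + logging.INFO) // 2   # a positive integer

        logging.addLevelName(VERBOSE, "VERBOSE")
        assert logging.getLevelName(VERBOSE) == "VERBOSE"

        logging.basicConfig()
        log = logging.getLogger("MyModule")
        log.setLevel(VERBOSE)
        log.log(VERBOSE, "a message at the custom level")  # reported as VERBOSE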

Loggers

    Each Logger object keeps track of a log level (or threshold) that
    it is interested in, and discards log requests below that level.

    A *Manager* class instance maintains the hierarchical namespace of
    named Logger objects.  Generations are denoted with dot-separated
    names: Logger "foo" is the parent of Loggers "foo.bar" and
    "foo.baz".

    The Manager class instance is a singleton and is not directly
    exposed to users, who interact with it using various module-level
    functions.

    The general logging method is:

        class Logger:
            def log(self, lvl, msg, *args, **kwargs):
                """Log 'str(msg) % args' at logging level 'lvl'."""
                ...

    However, convenience functions are defined for each logging level:

        class Logger:
            def debug(self, msg, *args, **kwargs): ...
            def info(self, msg, *args, **kwargs): ...
            def warn(self, msg, *args, **kwargs): ...
            def error(self, msg, *args, **kwargs): ...
            def critical(self, msg, *args, **kwargs): ...

    Only one keyword argument is recognized at present - "exc_info".
    If true, the caller wants exception information to be provided in
    the logging output.  This mechanism is only needed if exception
    information needs to be provided at *any* logging level.  In the
    more common case, where exception information needs to be added to
    the log only when errors occur, i.e. at the ERROR level, then
    another convenience method is provided:

        class Logger:
            def exception(self, msg, *args): ...

    This should only be called in the context of an exception handler,
    and is the preferred way of indicating a desire for exception
    information in the log.  The other convenience methods are
    intended to be called with exc_info only in the unusual situation
    where you might want to provide exception information in the
    context of an INFO message, for example.
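
    A sketch of the two styles (the file name "settings.cfg" is purely
    illustrative):

        import logging

        logging.basicConfig()
        log = logging.getLogger("MyApp")
        log.setLevel(logging.INFO)

        try:
            f = open("settings.cfg")
        except IOError:
            # Preferred: an ERROR-level entry with the traceback attached.
            log.exception("Could not open settings")

        try:
            f = open("settings.cfg")
        except IOError:
            # The unusual case: keep the traceback, but log at INFO level.
            log.info("Falling back to defaults", exc_info=1)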

    The "msg" argument shown above will normally be a format string;
    however, it can be any object x for which str(x) returns the
    format string.  This facilitates, for example, the use of an
    object which fetches a locale-specific message for an
    internationalized/localized application, perhaps using the
    standard gettext module.  An outline example:

        import gettext

        class Message:
            """Represents a message"""
            def __init__(self, id):
                """Initialize with the message ID"""
                self.id = id

            def __str__(self):
                """Return an appropriate localized message text"""
                # One possible lookup, using the standard gettext module
                return gettext.gettext(self.id)

        ...

        logger.info(Message("abc"), ...)

    Gathering and formatting data for a log message may be expensive,
    and a waste if the logger was going to discard the message anyway.
    To see if a request will be honoured by the logger, the
    isEnabledFor() method can be used:

        class Logger:
            def isEnabledFor(self, lvl):
                """
                Return true if requests at level 'lvl' will NOT be
                discarded.
                """
                ...

    so instead of this expensive and possibly wasteful DOM to XML
    conversion:

        ...
        hamletStr = hamletDom.toxml()
        log.info(hamletStr)
        ...

    one can do this:

        if log.isEnabledFor(logging.INFO):
            hamletStr = hamletDom.toxml()
            log.info(hamletStr)

    When new loggers are created, they are initialized with a level
    which signifies "no level".  A level can be set explicitly using
    the setLevel() method:

        class Logger:
            def setLevel(self, lvl): ...

    If a logger's level is not set, the system consults all its
    ancestors, walking up the hierarchy until an explicitly set level
    is found.  That is regarded as the "effective level" of the
    logger, and can be queried via the getEffectiveLevel() method:

        class Logger:
            def getEffectiveLevel(self): ...

    Loggers are never instantiated directly.  Instead, a module-level
    function is used:

        def getLogger(name=None): ...

    If no name is specified, the root logger is returned.  Otherwise,
    if a logger with that name exists, it is returned.  If not, a new
    logger is initialized and returned.  Here, "name" is synonymous
    with "channel name".

    Users can specify a custom subclass of Logger to be used by the
    system when instantiating new loggers:

        def setLoggerClass(klass): ...

    The passed class should be a subclass of Logger, and its __init__
    method should call Logger.__init__.
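
    A minimal sketch of such a subclass (the class name and its extra
    attribute are purely illustrative):

        import logging

        class MyLogger(logging.Logger):
            """A hypothetical Logger subclass."""
            def __init__(self, name):
                logging.Logger.__init__(self, name)   # required, as noted above
                self.context = {}                     # illustrative extra state

        logging.setLoggerClass(MyLogger)
        log = logging.getLogger("MyApp.custom")       # instantiated as MyLogger
        assert isinstance(log, MyLogger)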


Handlers

    Handlers are responsible for doing something useful with a given
    LogRecord.  The following core Handlers will be implemented:

        - StreamHandler: A handler for writing to a file-like object.
        - FileHandler: A handler for writing to a single file or set
          of rotating files.
        - SocketHandler: A handler for writing to remote TCP ports.
        - DatagramHandler: A handler for writing to UDP sockets, for
          low-cost logging.  Jeff Bauer already had such a system [5].
        - MemoryHandler: A handler that buffers log records in memory
          until the buffer is full or a particular condition occurs
          [1].
        - SMTPHandler: A handler for sending to email addresses via SMTP.
        - SysLogHandler: A handler for writing to Unix syslog via UDP.
        - NTEventLogHandler: A handler for writing to event logs on
          Windows NT, 2000 and XP.
        - HTTPHandler: A handler for writing to a Web server with
          either GET or POST semantics.

    Handlers can also have levels set for them using the
    setLevel() method:

        def setLevel(self, lvl): ...


    The FileHandler can be set up to create a rotating set of log
    files.  In this case, the file name passed to the constructor is
    taken as a "base" file name.  Additional file names for the
    rotation are created by appending .1, .2, etc. to the base file
    name, up to a maximum as specified when rollover is requested.
    The setRollover method is used to specify a maximum size for a log
    file and a maximum number of backup files in the rotation.

        def setRollover(self, maxBytes, backupCount): ...

    If maxBytes is specified as zero, no rollover ever occurs and the
    log file grows indefinitely.  If a non-zero size is specified,
    when that size is about to be exceeded, rollover occurs.  The
    rollover method ensures that the base file name is always the most
    recent, .1 is the next most recent, .2 the next most recent after
    that, and so on.
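
    A sketch of a rotating setup, using the interface as specified
    here (the base file name and limits are illustrative):

        import logging

        handler = logging.FileHandler("myapp.log")   # hypothetical base file name
        handler.setRollover(1024 * 1024, 5)          # ~1 MB per file, 5 backups kept
        logging.getLogger("MyApp").addHandler(handler)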

    There are many additional handlers implemented in the test/example
    scripts provided with [6] - for example, XMLHandler and
    SOAPHandler.


LogRecords

    A LogRecord acts as a receptacle for information about a
    logging event.  It is little more than a dictionary, though it
    does define a getMessage method which merges a message with
    optional runtime arguments.


Formatters

    A Formatter is responsible for converting a LogRecord to a string
    representation.  A Handler may call its Formatter before writing a
    record.  The following core Formatters will be implemented:

        - Formatter: Provide printf-like formatting, using the % operator.

        - BufferingFormatter: Provide formatting for multiple
          messages, with header and trailer formatting support.

    Formatters are associated with Handlers by calling setFormatter()
    on a handler:

        def setFormatter(self, form): ...

    Formatters use the % operator to format the logging message.  The
    format string may contain %(name)x directives; the attribute
    dictionary of the LogRecord is used to obtain message-specific
    data.  The following attributes are provided:

    %(name)s            Name of the logger (logging channel)

    %(levelno)s         Numeric logging level for the message (DEBUG,
                        INFO, WARN, ERROR, CRITICAL)

    %(levelname)s       Text logging level for the message ("DEBUG", "INFO",
                        "WARN", "ERROR", "CRITICAL")

    %(pathname)s        Full pathname of the source file where the logging
                        call was issued (if available)

    %(filename)s        Filename portion of pathname

    %(module)s          Module from which logging call was made

    %(lineno)d          Source line number where the logging call was issued
                        (if available)

    %(created)f         Time when the LogRecord was created (time.time()
                        return value)

    %(asctime)s         Textual time when the LogRecord was created

    %(msecs)d           Millisecond portion of the creation time

    %(relativeCreated)d Time in milliseconds when the LogRecord was created,
                        relative to the time the logging module was loaded
                        (typically at application startup time)

    %(thread)d          Thread ID (if available)

    %(message)s         The result of record.getMessage(), computed just as
                        the record is emitted

    If a formatter sees that the format string includes "%(asctime)s",
    the creation time is formatted into the LogRecord's asctime
    attribute.  To allow flexibility in formatting dates, Formatters
    are initialized with a format string for the message as a whole,
    and a separate format string for date/time.  The date/time format
    string should be in time.strftime format.  The default value for
    the message format is "%(message)s".  The default date/time format
    is ISO8601.

    The formatter uses a class attribute, "converter", to indicate how
    to convert a time from seconds to a tuple.  By default, the value
    of "converter" is "time.localtime".  If needed, a different
    converter (e.g.  "time.gmtime") can be set on an individual
    formatter instance, or the class attribute changed to affect all
    formatter instances.
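
    A sketch tying these pieces together, roughly reproducing the
    second output format shown under "Simple Example" above:

        import logging, sys, time

        handler = logging.StreamHandler(sys.stderr)
        form = logging.Formatter(
            "%(asctime)s %(name)-10s %(levelname)-8s - %(message)s",
            "%Y-%m-%d %H:%M:%S")
        form.converter = time.gmtime       # per-instance override of the converter
        handler.setFormatter(form)

        log = logging.getLogger("MyModule")
        log.addHandler(handler)
        log.error("Doin' stuff...")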


Filters

    When level-based filtering is insufficient, a Filter can be called
    by a Logger or Handler to decide if a LogRecord should be output.
    Loggers and Handlers can have multiple filters installed, and any
    one of them can veto a LogRecord being output.

        class Filter:
            def filter(self, record):
                """
                Return a value indicating true if the record is to be
                processed.  Possibly modify the record, if deemed
                appropriate by the filter.
                """

    The default behaviour allows a Filter to be initialized with a
    Logger name.  This will only allow through events which are
    generated using the named logger or any of its children.  For
    example, a filter initialized with "A.B" will allow events logged
    by loggers "A.B", "A.B.C", "A.B.C.D", "A.B.D" etc. but not "A.BB",
    "B.A.B" etc.  If initialized with the empty string, all events are
    passed by the Filter.  This filter behaviour is useful when it is
    desired to focus attention on one particular area of an
    application; the focus can be changed simply by changing a filter
    attached to the root logger.
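
    For example, assuming basicConfig() has installed a single handler
    on the root logger, attaching a Filter to that handler focuses the
    output on the "A.B" sub-tree:

        import logging

        logging.basicConfig()
        handler = logging.getLogger("").handlers[0]   # the handler from basicConfig()
        handler.addFilter(logging.Filter("A.B"))

        logging.getLogger("A.B.C").warn("passes the filter")
        logging.getLogger("A.BB").warn("rejected by the filter")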

    There are many examples of Filters provided in [6].


Configuration

    The main benefit of a logging system like this is that one can
    control how much and what logging output one gets from an
    application without changing that application's source code.
    Therefore, although configuration can be performed through the
    logging API, it must also be possible to change the logging
    configuration without changing an application at all.  For
    long-running programs like Zope, it should be possible to change
    the logging configuration while the program is running.

    Configuration includes the following:

        - What logging level a logger or handler should be interested in.
        - What handlers should be attached to which loggers.
        - What filters should be attached to which handlers and loggers.
        - Specifying attributes specific to certain handlers and filters.

    In general each application will have its own requirements for how
    a user may configure logging output.  However, each application
    will specify the required configuration to the logging system
    through a standard mechanism.

    The simplest configuration is that of a single handler, writing
    to stderr, attached to the root logger.  This configuration is set
    up by calling the basicConfig() function once the logging module
    has been imported.

        def basicConfig(): ...

    For more sophisticated configurations, this PEP makes no specific
    proposals, for the following reasons:

        - A specific proposal may be seen as prescriptive.
        - Without the benefit of wide practical experience in the
          Python community, there is no way to know whether any given
          configuration approach is a good one.  That practice can't
          really come until the logging module is used, and that means
          until *after* Python 2.3 has shipped.
        - There is a likelihood that different types of applications
          may require different configuration approaches, so that no
          "one size fits all".

    The reference implementation [6] has a working configuration file
    format, implemented for the purpose of proving the concept and
    suggesting one possible alternative.  It may be that separate
    extension modules, not part of the core Python distribution, are
    created for logging configuration and log viewing, supplemental
    handlers and other features which are not of interest to the bulk
    of the community.


Thread Safety

    The logging system should support thread-safe operation without
    any special action needing to be taken by its users.


Module-Level Functions

    To support use of the logging mechanism in short scripts and small
    applications, module-level functions debug(), info(), warn(),
    error(), critical() and exception() are provided.  These work in
    the same way as the correspondingly named methods of Logger - in
    fact they delegate to the corresponding methods on the root
    logger.  A further convenience provided by these functions is that
    if no configuration has been done, basicConfig() is automatically
    called.

    At application exit, all handlers can be flushed by calling the function

        def shutdown(): ...

    This will flush and close all handlers.
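
    A minimal sketch (no configuration is performed explicitly; the
    first logging call triggers basicConfig() on the caller's behalf):

        import logging

        logging.warn("About to do something risky")
        logging.error("It failed: %s", "timeout")   # "timeout" is illustrative
        logging.shutdown()                          # flush and close all handlers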


Implementation

    The reference implementation is Vinay Sajip's logging module [6].


Packaging

    The reference implementation is implemented as a single module.
    This offers the simplest interface - all users have to do is
    "import logging" and they are in a position to use all the
    functionality available.


References

    [1] java.util.logging
        http://java.sun.com/j2se/1.4/docs/guide/util/logging/

    [2] log4j: a Java logging package
        http://jakarta.apache.org/log4j/docs/index.html

    [3] Protomatter's Syslog
        http://protomatter.sourceforge.net/1.1.6/index.html
        http://protomatter.sourceforge.net/1.1.6/javadoc/com/protomatter/syslog/syslog-whitepaper.html

    [4] MAL mentions his mx.Log logging module:
        http://mail.python.org/pipermail/python-dev/2002-February/019767.html

    [5] Jeff Bauer's Mr. Creosote
        http://starship.python.net/crew/jbauer/creosote/

    [6] Vinay Sajip's logging module.
        http://www.red-dove.com/python_logging.html


Copyright

    This document has been placed in the public domain.