Logging with Python's built-in logging module

Article directory

    • 1 Start using logging
      • 1.1 The first program
      • 1.2 Log levels
      • 1.3 Output format
    • 2 Output logs to a file
      • 2.1 Configure a file path with basicConfig
      • 2.2 Modular design of logging
      • 2.3 Automatically split log files

Recently a small requirement came up that needed logs saved to a file. I usually debug with print, which is inconvenient: the statements have to be deleted when they are no longer needed, and they can only send output to the console. So I searched online and found that Python has a built-in module, logging, for emitting log messages, with many configuration options. Below is a personal summary, written mainly to consolidate my own learning; I hope it is helpful to you as well.

1 Start using logging

1.1 The first program

First, the simplest usage:

# -*- coding: utf-8 -*-
import logging

logging.debug('debug level, generally used to print some debugging information, the lowest level')
logging.info('info level, generally used to print some normal operation information')
logging.warning('warning level, generally used to print warning information')
logging.error('error level, generally used to print some error messages')
logging.critical('critical level, generally used to print some fatal error messages, the highest level')

In this way, the log information can be output directly on the console:

WARNING:root:warning level, generally used to print warning information
ERROR:root:error level, generally used to print some error messages
CRITICAL:root:critical level, generally used to print some fatal error messages, the highest level

1.2 Log levels

You will find that only three of the five messages are output. This is because log messages have levels: the five shown above increase in severity from top to bottom, and logging can be configured to print only messages at or above a given level. The default level is WARNING, so only WARNING and above are printed.
If we also want debug and info to be printed, we can configure this with basicConfig:

logging.basicConfig(level=logging.DEBUG)

With that, the console output contains all five messages above.

Log levels are not unique to Python; logging systems are generally divided into levels, which lets us focus on different things at different times. For example, we can emit debugging details at the DEBUG level while the logging level is set to DEBUG; later, when those logs are no longer needed, we simply raise the level to INFO or higher, with no need to comment out or delete statements the way we would with print.
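Under the hood, the five named levels are just integers, which is what makes the "at or above this level" comparison possible. A quick sketch to see the numeric values:

```python
import logging

# Each named level is an integer; a record is emitted only when its
# level is >= the configured threshold.
for name in ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'):
    print(name, '=', getattr(logging, name))
# DEBUG = 10, INFO = 20, WARNING = 30, ERROR = 40, CRITICAL = 50
```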

1.3 Output format

The log output above is quite terse and may not meet our needs: for example, we may want each message to include the time, the location in the source, and so on. This, too, can be configured through basicConfig.

logging.basicConfig(format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
                    level=logging.DEBUG)

Then the output will be in this format:

2019-07-19 15:54:26,625 - log_test.py[line:11] - DEBUG: debug level, generally used to print some debugging information, the lowest level

The format string specifies the content and layout of the output; its built-in attributes are as follows:

%(name)s: name of the Logger
%(levelno)s: numeric value of the log level
%(levelname)s: name of the log level
%(pathname)s: full path of the source file where the logging call was issued
%(filename)s: file name portion of pathname
%(funcName)s: name of the function containing the logging call
%(lineno)d: line number where the logging call was issued
%(asctime)s: time when the log record was created
%(thread)d: thread ID
%(threadName)s: thread name
%(process)d: process ID
%(message)s: the logged message

In addition, basicConfig accepts many other configuration options, some of which are introduced below.
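One such option is datefmt, which controls how %(asctime)s is rendered. The sketch below applies it through a Formatter directly and constructs a LogRecord by hand purely for illustration, so the formatted result can be seen without any handler setup:

```python
import logging

# datefmt controls how %(asctime)s is rendered by the Formatter.
formatter = logging.Formatter(
    fmt='%(asctime)s - %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')

# Build a record by hand just to see the formatted output directly.
record = logging.LogRecord(
    name='demo', level=logging.INFO, pathname='log_test.py',
    lineno=11, msg='hello', args=(), exc_info=None)
print(formatter.format(record))
# e.g. 2019-07-19 15:54:26 - INFO: hello
```

The same datefmt string can be passed to basicConfig alongside format.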

2 Output logs to a file

2.1 Configure a file path with basicConfig

So far we have only output logs to the console, but in many cases we need to save them to a file, so that when something goes wrong with the program we can locate the problem from the log.
The easiest way is with basicConfig:

logging.basicConfig(format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
                    level=logging.DEBUG,
                    filename='test.log',
                    filemode='a')

Just add the filename and filemode parameters to the earlier configuration and the log is written to test.log; if the file does not exist, it is created automatically.
The filemode parameter is the file open mode. It defaults to 'a' (append) when not set, so it can be omitted; it can also be set to 'w', in which case the previous log is overwritten each time.
After doing this, however, we find that nothing appears on the console anymore. How do we output to the console and write to the file at the same time?
That requires digging a little deeper.

2.2 Modular design of logging

So far we have used logging only in very simple ways, which is limiting. In fact, the logging library has a modular design and provides several components: loggers, handlers, filters, and formatters.

  • Logger exposes an interface that application code can use directly.
  • Handler sends log records (produced by the logger) to the appropriate destination.
  • Filter provides finer-grained control over which log records to output.
  • Formatter specifies the content and format of the log records in the final output.

Simply put, a Logger records log messages, which are then handed to Handlers for processing; Filters filter out records (not only by level), and Formatters play the same role as the format string above, defining the content and layout of the output.
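Filters are the one component the rest of this article does not demonstrate, so here is a minimal sketch of one. The class name and the filtering rule are made up for illustration: a Filter subclass returns a falsy value from filter() to drop a record, regardless of its level.

```python
import io
import logging

# A custom Filter: drop any record whose message contains "password".
# (The class name and rule are hypothetical, for illustration only.)
class NoPasswords(logging.Filter):
    def filter(self, record):
        return 'password' not in record.getMessage()

logger = logging.getLogger('filter_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the demo output away from the root logger

buf = io.StringIO()       # capture output in memory for the demo
handler = logging.StreamHandler(buf)
handler.addFilter(NoPasswords())
logger.addHandler(handler)

logger.info('user logged in')    # kept
logger.info('password=hunter2')  # dropped by the filter
print(buf.getvalue())
```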

With these components in mind, let's try logging again the modular way:

logger = logging.getLogger('test')

logger.debug('debug level, generally used to print some debugging information, the lowest level')
logger.info('info level, generally used to print some normal operation information')
logger.warning('warning level, generally used to print warning information')
logger.error('error level, generally used to print some error messages')
logger.critical('critical level, generally used to print some fatal error messages, the highest level')

The first line, getLogger, obtains a logger, with the name identifying it. The output calls that follow look very much like the logging usage at the beginning, so it seems simple enough. But this does not work as-is; on Python 2, running it reports an error (Python 3 falls back to a handler of last resort and still prints WARNING and above):

No handlers could be found for logger "test"

It means that no handler has been specified for this logger, so it does not know how to process the log or where to send it. Let's add a Handler to it, then. There are many kinds of Handlers; four are commonly used:

  • logging.StreamHandler -> console output
  • logging.FileHandler -> file output
  • logging.handlers.RotatingFileHandler -> splits the log file by size; once it reaches the specified size, a new file is started
  • logging.handlers.TimedRotatingFileHandler -> splits log files by time

Now let’s use the simplest StreamHandler to output the log to the console:

logger = logging.getLogger('test')

stream_handler = logging.StreamHandler()
logger.addHandler(stream_handler)
...

In this way, you can see in the console:

warning level, generally used to print warning information
error level, generally used to print some error messages
critical level, generally used to print some fatal error messages, the highest level

A few messages are still missing because we have not set the log level. Let's set the level as well, and also use a Formatter to set the output format:

logger = logging.getLogger('test')
logger.setLevel(level=logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s')
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
...

Notice that the Formatter is set on the handler. That is easy to understand: the handler is responsible for where the log goes, so the format belongs to it rather than to the logger. But why does the level need to be set twice? Setting it on the logger tells it which levels to record; setting it on a handler tells that handler which levels to output, so records are effectively filtered twice. The benefit is that when there are multiple destinations, such as a file and the console, each can be given its own level. The logger's level filters first, so a record dropped by the logger never reaches any handler, and changing the logger's level affects all outputs. The two together make log levels much easier to manage.
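The two-stage filtering described above can be sketched as follows; the in-memory buffer is just a stand-in for the console so the effect is easy to inspect:

```python
import io
import logging

# The logger lets DEBUG through, but this handler only accepts INFO+,
# so DEBUG records pass the first filter and are dropped at the second.
logger = logging.getLogger('level_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the demo output away from the root logger

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.debug('passes the logger, dropped by the handler')
logger.info('passes both, so it appears in the output')
print(buf.getvalue())
```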

With the handler, we can easily output logs to the console and files at the same time:

logger = logging.getLogger('test')
logger.setLevel(level=logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s')

file_handler = logging.FileHandler('test2.log')
file_handler.setLevel(level=logging.INFO)
file_handler.setFormatter(formatter)

stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.DEBUG)
stream_handler.setFormatter(formatter)

logger.addHandler(file_handler)
logger.addHandler(stream_handler)

All it takes is adding one more FileHandler.

2.3 Automatically split log files

Sometimes we need to split log files to make them easier to manage. Python provides two handlers for splitting files:

  • logging.handlers.RotatingFileHandler -> splits the log file by size; once it reaches the specified size, a new file is started
  • logging.handlers.TimedRotatingFileHandler -> splits log files by time

Usage is similar to the handlers above; only a few extra parameter settings are needed. For example, when='D' rotates the file on a daily cycle; the meaning of the other parameters can be found in the reference article "Python + logging output to the screen, write the log to a file".

from logging import handlers

time_rotating_file_handler = handlers.TimedRotatingFileHandler(filename='rotating_test.log', when='D')
time_rotating_file_handler.setLevel(logging.DEBUG)
time_rotating_file_handler.setFormatter(formatter)

logger.addHandler(time_rotating_file_handler)

If when='S' is used instead, the file is rotated every second, and running the program a few times produces several files:

The file without a date suffix is the most recent log file.
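TimedRotatingFileHandler also accepts a backupCount parameter, which caps how many rotated files are kept (older ones are deleted as new ones are created). A small sketch; delay=True is used here only so the file is not opened until the first record is actually emitted:

```python
import logging
from logging import handlers

# Rotate every second, keep at most 3 old files; delay=True postpones
# opening the file until the first record is emitted.
handler = handlers.TimedRotatingFileHandler(
    filename='rotating_test.log', when='S', backupCount=3, delay=True)
handler.setLevel(logging.DEBUG)
```

Without backupCount (the default is 0), rotated files accumulate indefinitely.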

Reference articles:
Python + logging output to the screen, write the log to a file
Python standard module: logging

Reprinted from: https://blog.csdn.net/Runner1st/article/details/96481954