Python Logging Guide Part 1: The Basics


Once your Python programs grow beyond basic scripts run from a command line, using print() statements for logging becomes a difficult practice to scale. Using a logging module enables you to better control where, how, and what you log, with much more granularity. As a result, you can reduce debugging time, improve code quality, and increase the visibility of your infrastructure.

To help you get up to speed with Python logging, we’re creating a multi-part guide to cover what you need to know to make your Python logging efficient, useful, and scalable. To get the most out of this guide, you should be comfortable with basic Python programming and understand general logging best practices.

In this first post, Part One of our overview on Python logging, we’ll introduce you to the default logging module and log levels, and we’ll walk through basic examples of how you can get started with Python logging.

Python’s Default Logging Module

The first step in understanding Python logging is familiarizing yourself with the default logging module, which is included with Python’s standard library. The default logging module provides an easy-to-use framework for emitting log messages in a Python program. It’s simple enough that you can hit the ground running in a few minutes and extensible enough to cover a variety of use cases. 

With the default Python logging module, you can:

  • Create custom log messages with timestamps  

  • Emit logs to different destinations (such as the terminal, syslog, or systemd)

  • Define the severity of log messages

  • Format logs to meet different requirements 

  • Report errors without raising exceptions 

  • Capture the source of log messages

How Does Python’s Default Logging Module Work?

At a high level, Python’s default logging module consists of these components:

  • Loggers expose an interface that your code can use to log messages. 

  • Handlers send the logs created by loggers to their destination. Popular handlers include:

    • FileHandler: For sending log messages to a file

    • StreamHandler: For sending log messages to an output stream like stdout 

    • SysLogHandler: For sending log messages to a syslog daemon 

    • HTTPHandler: For sending log messages over HTTP

  • Filters provide a mechanism to determine which logs are recorded.

  • Formatters determine the output formatting of log messages. 
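To illustrate how these pieces fit together, here’s a minimal sketch (the logger name and the RedactFilter class are our own, for illustration only) that wires a logger, handler, formatter, and a custom Filter that drops records containing a keyword:

```python
import logging

# A custom filter: reject any record whose message contains "secret"
class RedactFilter(logging.Filter):
    def filter(self, record):
        return 'secret' not in record.getMessage()

logger = logging.getLogger('FilterDemo')
handler = logging.StreamHandler()  # sends records to stderr by default
handler.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
handler.addFilter(RedactFilter())  # filters can be attached to handlers or loggers
logger.addHandler(handler)

logger.warning('this message is logged')     # passes the filter
logger.warning('this secret never appears')  # rejected by RedactFilter
```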

To use the default logger, just add import logging to your Python program, and then create a log message.

Here’s a basic example that uses the default logger (also known as the root logger):

# Import the default logging module
import logging

# Emit a warning message
logging.warning('You are learning Python logging!')

Running that code will print this message to the console:

WARNING:root:You are learning Python logging!

In that example, we can see the default message format is as follows:

<LEVEL>:<NAME>:<MESSAGE>

<LEVEL> is the severity of the message, <NAME> is the name of our logger, and <MESSAGE> is the log message itself.

In many cases, we’ll want to modify how messages are formatted. We can call basicConfig() at the beginning of our code to customize formatting for the root logger.

For example, suppose we want to add a timestamp to our message. We can add %(asctime)s to the format string in a basicConfig() call. To retain the rest of our original formatting, we’ll also need to include %(levelname)s:%(name)s:%(message)s.

Our resulting code will look like this: 

# Import the default logging module 

import logging

# Format the log message
logging.basicConfig(format='%(asctime)s %(levelname)s:%(name)s:%(message)s')

# Emit a warning message
logging.warning('You are learning Python logging!')

The output should look similar to the following:

2022-11-11 11:11:51,994 WARNING:root:You are learning Python logging!

Creating a Custom Logger

What if we don’t want to use the root logger?

In that case, we can create our own logger with logging.getLogger() and configure its level, handler, and formatter ourselves (remember, basicConfig() only configures the root logger). For example, the script below creates a HumioDemoLogger set to log INFO-level messages with formatting similar to our previous example.

# Import the default logging module
import logging

# Create our demo logger
logger = logging.getLogger('HumioDemoLogger')

# Set a log level for the logger
logger.setLevel(logging.INFO)

# Create a console handler 
handler = logging.StreamHandler()

# Set INFO level for handler
handler.setLevel(logging.INFO)

# Create a message format that matches earlier example
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Add our format to our handler
handler.setFormatter(formatter)

# Add our handler to our logger
logger.addHandler(handler)

# Emit an INFO-level message
logger.info('Python logging is cool!')

When you run the script, the output should look similar to the following:

2022-11-11 11:11:38,525 - HumioDemoLogger - INFO - Python logging is cool!

Python Logging Levels

If you’re familiar with the Syslog protocol, the idea of logging levels and log severity should be intuitive. In short, log messages generally include a severity that indicates the importance of the message.

There are six default severities with the default Python logging module. Each default severity is associated with a number, and a higher numeric value indicates a more severe logging level. The table below describes each of the default logging levels. 

Default Python Logging Levels

Level     Numeric value  Description
CRITICAL  50             A serious error; the program itself may be unable to continue running
ERROR     40             A more serious problem kept the program from performing a function
WARNING   30             Something unexpected happened, or a problem may occur in the near future
INFO      20             Confirmation that things are working as expected
DEBUG     10             Detailed information, typically of interest only when diagnosing problems
NOTSET    0              All messages are processed by the root logger; other loggers inherit their effective level from an ancestor

It’s important to understand that the logger will log everything at or above the severity it is set to. The default configuration is set to log WARNING-level messages, so let’s see what happens when we create a message with a severity of INFO.

# Import the default logging module
import logging

# Emit an INFO-level message
logging.info('Keep going, you are doing great!')

When we run our script, we notice that this message, as expected, doesn’t print to the console.

If we want to log INFO-level messages, we can use basicConfig() and set level=logging.INFO.

The new code will look like this:

# Import the default logging module
import logging

# Set the root logger to log INFO-level messages
logging.basicConfig(level=logging.INFO)

# Emit an INFO-level message
logging.info('Keep going, you are doing great!')

The output will look similar to the following:

INFO:root:Keep going, you are doing great!

Sending Python Logs to Different Destinations 

Thus far, we’ve emitted our log messages to the console. That’s great for local debugging, but you’ll often need to send logs to other destinations in practice.

Later in our Python Logging Guide, we’ll cover more advanced topics like centralized logging and StreamHandler for Django. For now, we’ll focus on three common use cases:

  1. Logging to a file

  2. Logging to syslog

  3. Logging to systemd-journald

Sending Python logs to a file

If you want your Python app to create a log file, you can use the default logging module and specify a filename in your code. For example, to make our original WARNING-level script write to a file called HumioDemo.log, we add the following line:

logging.basicConfig(filename='HumioDemo.log')
The new script should look like this:

# Import the default logging module
import logging

# Set basicConfig() to create a log file
logging.basicConfig(filename='HumioDemo.log')

# Emit a warning message
logging.warning('You are learning Python logging!')

Nothing will print to the console when you run that script. Instead, it will create a HumioDemo.log file in the current working directory, and this file will include the log message.

Sending Python logs to syslog

Syslog is a popular mechanism to centralize local and remote logs from applications throughout a system. The default Python logging module includes a SysLogHandler class to send logs to a local or remote syslog server. There’s also a standard syslog module that makes it easy to write to syslog for basic Python use cases.
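As a sketch of the SysLogHandler approach, the example below assumes a syslog daemon reachable over UDP on localhost port 514; on most Linux systems you could pass address='/dev/log' instead to use the local syslog socket:

```python
import logging
import logging.handlers

logger = logging.getLogger('SyslogHandlerDemo')

# Send records to a syslog daemon over UDP; swap in address='/dev/log'
# to target the local syslog socket on Linux
handler = logging.handlers.SysLogHandler(address=('localhost', 514))
logger.addHandler(handler)

logger.warning('Logging a WARNING message with SysLogHandler!')
```

Because SysLogHandler is a standard handler, it composes with the formatters and filters shown earlier.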

Here’s a script that uses the standard syslog module:

# Import the standard syslog module
import syslog

# Emit an INFO-level message
syslog.syslog(syslog.LOG_INFO, 'Logging an INFO message with the syslog module!')

# Emit a WARNING-level message
syslog.syslog(syslog.LOG_WARNING, 'Logging a WARNING message with the syslog module!')

After running that script, you should see messages in the system’s local syslog file. Depending on your system, that file might be /var/log/syslog or /var/log/messages. Log messages will look similar to the following:

Nov 11 11:11:16 localhost Logging an INFO message with the syslog module!
Nov 11 11:11:16 localhost Logging a WARNING message with the syslog module!

Sending Python logs to systemd-journald 

Logging with systemd-journald has several benefits, including:

  • Faster lookups thanks to binary storage

  • Enforced structured logging

  • Automatic log rotation based on journald.conf values

On most modern Linux systems using systemd, if your Python app runs as a systemd unit, whatever it prints to stdout or stderr will write to systemd-journald. That means all you need to do is send your log output to stdout or stderr. 
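A minimal sketch of that approach: point a StreamHandler at stdout and let systemd capture the stream (journald only picks this up when the script actually runs as a systemd unit, which is outside the scope of this example):

```python
import logging
import sys

logger = logging.getLogger('JournalStdoutDemo')

# Under systemd, anything written to stdout lands in the journal
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(levelname)s %(name)s %(message)s'))
logger.addHandler(handler)

logger.warning('captured by journald when run as a systemd unit')
```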

In addition to modules included with the standard Python library, wrappers like the python-systemd library help streamline the process of sending Python logs to systemd-journald.

For example, to use python-systemd, first install it using your system’s package manager. Then add the following line to your code:

from systemd import journal

Here’s a simple Python script that uses python-systemd’s JournalHandler to write a WARNING-level message to journald. 

# Import the default logging module and the journal wrapper
import logging
from systemd import journal

# Create a logger and attach a journald handler
logger = logging.getLogger('humioDemoLogger')
logger.addHandler(journal.JournalHandler())

# Emit a WARNING-level message
logger.warning('logging is easy!')

After running the above script, we run journalctl and see output similar to:

Nov 11 11:11:57 localhost[2111]: logging is easy!

Best Practices for Emitting Python Logs

At this point, you should be able to implement basic logging for your Python applications. However, there is plenty more to learn about the standard logging module. Reading PEP 282, the official Advanced Tutorial, and Logging Cookbook are great ways to dive deeper.

As you progress, keep in mind the following best practices:

1. Include timestamps with your messages

When an event occurred is a critical piece of information. Therefore, you should include a timestamp with every message you emit. With the default logging module, you can add a timestamp to your formatter, as we did with %(asctime)s in our earlier example. You can further customize timestamps by overriding Formatter.formatTime.

2. Have a mechanism to rotate logs

If you store logs on disk, then have a log rotation strategy to avoid disk space issues. With the default Python logging module, consider using the RotatingFileHandler class.
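Here’s a minimal sketch of RotatingFileHandler; the file name and limits are arbitrary choices for illustration:

```python
import logging
import logging.handlers

logger = logging.getLogger('RotationDemo')

# Roll demo.log over at ~1 MB, keeping 3 old copies (demo.log.1 ... demo.log.3)
handler = logging.handlers.RotatingFileHandler(
    'demo.log', maxBytes=1_000_000, backupCount=3
)
logger.addHandler(handler)

logger.warning('this message goes to demo.log')
```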

3. Don’t instantiate loggers directly

Instead of instantiating Logger objects directly, use logging.getLogger(name). The logger naming hierarchy is similar to Python’s package hierarchy, and it’s exactly the same if you name loggers after their corresponding modules, as the docs recommend.
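The usual pattern is one line, logger = logging.getLogger(__name__), at the top of each module. The sketch below spells out dotted names (our own, for illustration) to show how the hierarchy behaves:

```python
import logging

# 'myapp.db' is automatically a child of 'myapp' in the logger hierarchy,
# just as a myapp.db module lives inside the myapp package
parent = logging.getLogger('myapp')
child = logging.getLogger('myapp.db')

# Children inherit effective levels from their dotted-name ancestors
parent.setLevel(logging.INFO)
assert child.getEffectiveLevel() == logging.INFO
assert child.parent is parent
```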

4. Centralize your logs

Multiple log files scattered across multiple systems can become almost as unwieldy as those print() statements we originally wanted to get rid of. Centralizing your logs for parsing and analysis gives you observability at scale.
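As one building block for centralization, here’s a sketch using the standard HTTPHandler; the host and path are placeholders, not a real collector endpoint:

```python
import logging
import logging.handlers

logger = logging.getLogger('CentralDemo')

# Each record would be POSTed to http://logs.example.com:8080/ingest
handler = logging.handlers.HTTPHandler(
    'logs.example.com:8080', '/ingest', method='POST'
)
logger.addHandler(handler)

# With a real collector in place, you would then log as usual:
# logger.warning('shipped to the central collector')
```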

What’s next?

You now know the basics of Python logging. In Part Two, we’ll explore more advanced topics such as:

  • Configuring multiple loggers

  • Understanding exceptions and tracebacks

  • Structured vs unstructured data, and why it matters

  • Using python-json-logger

If you’d like to learn more about logging strategy, check out our Advanced Log Management course Spring ‘22.