Python Logging Best Practices 2026: stdlib vs structlog vs loguru with JSON Output
Logging is one of those things developers think they understand until they debug a production incident at 2 AM and realize their logs are either missing, garbled, or impossible to query. This guide covers the full landscape of Python logging in 2026: the standard library's pitfalls and proper usage, structured logging with structlog, ergonomic logging with loguru, and how to wire everything into a modern observability stack.
Sources referenced throughout: Python docs, structlog.org, loguru docs, and the 12-factor app logging principle.
Why print() is Not Acceptable in Production
It is tempting to leave print() statements in application code — they are immediate and require no setup. The problems surface quickly in production:
- No severity levels. You cannot distinguish a debug trace from a critical alert without parsing the message text manually.
- No timestamps. You lose the ability to correlate events across services unless you bolt on your own formatting.
- No destination control. print() goes to stdout unconditionally. You cannot route errors to stderr, a file, or a log aggregator without monkey-patching.
- No structured output. Log aggregators like Loki, Datadog, or CloudWatch expect JSON or key=value pairs they can index. Free-form print output must be parsed with fragile regex.
- No caller context. The logging module captures the module name, function, and line number automatically. print() gives you nothing.
- Thread-safety edge cases. The logging module's handlers are thread-safe by default. Interleaved print() calls from threads will corrupt output without explicit locks.
The fix is not complicated. The standard library logging module ships with Python and costs nothing to use correctly.
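For a standalone script, a couple of lines of basicConfig already buys you levels, timestamps, and destination control; a minimal sketch (libraries and production applications are covered in the sections below):
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)
logger.info("Job started")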
The Root Logger Trap: Why logging.basicConfig() in Libraries Breaks Everything
The most common mistake beginners make is calling logging.basicConfig() at module level inside a library or reusable package:
# BAD — never do this in a library
import logging
logging.basicConfig(level=logging.DEBUG)
Here is what goes wrong. logging.basicConfig() configures the root logger — the ancestor of every logger in the entire process. When your library is imported by an application, your basicConfig() call silently overwrites the application's logging configuration. Every logger in the process, including those belonging to other libraries, will now emit DEBUG-level output to stderr whether the application wants that or not.
The Python logging docs are explicit: libraries should add only a NullHandler to their top-level logger and leave all configuration to the application.
# CORRECT — in a library's __init__.py
import logging
logging.getLogger(__name__).addHandler(logging.NullHandler())
This ensures the library participates in the logging hierarchy without injecting any configuration. The application author decides where logs go, at what level, and in what format.
stdlib Logging the Right Way
Use logging.getLogger(__name__)
Every module that emits logs should create its own logger using __name__ as the identifier:
import logging
logger = logging.getLogger(__name__)
This creates a hierarchy that mirrors your package structure. If your package is myapp.api.users, the logger name is myapp.api.users, which is a child of myapp.api, which is a child of myapp. You can set the level on myapp and all children inherit it unless overridden. This is how you silence entire subsystems with a single configuration line.
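A quick sketch of that inheritance, assuming the package is named myapp:
import logging

logging.basicConfig(level=logging.DEBUG)  # demo-only handler setup; see dictConfig below for production

logging.getLogger("myapp").setLevel(logging.WARNING)   # applies to the whole myapp.* subtree

logging.getLogger("myapp.api.users").info("hidden")    # suppressed: INFO is below the inherited WARNING
logging.getLogger("myapp.api.users").warning("shown")  # emitted through the root handler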
Lazy Formatting
Do not format log strings eagerly:
# BAD — string is formatted even if DEBUG is disabled
logger.debug("Processing record: %s" % record)
logger.debug(f"Processing record: {record}")
# GOOD — formatting only happens if the message will be emitted
logger.debug("Processing record: %s", record)
The logging module accepts %-style format strings with arguments passed as separate parameters, and the interpolation is deferred until a handler actually emits the record. When DEBUG is disabled in production, the message string is never built at all, so the call costs almost nothing.
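Deferred interpolation does not help when building the argument itself is expensive. In that case, guard the call with isEnabledFor(); a sketch where expensive_summary() stands in for a hypothetical costly helper:
import logging

logger = logging.getLogger(__name__)

def log_cache_state(cache):
    # Skip the costly summary entirely when DEBUG is off
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Cache state: %s", expensive_summary(cache))  # expensive_summary is hypothetical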
Capturing Exceptions with exc_info=True
When catching exceptions, always pass exc_info=True to include the full traceback:
try:
result = process(payload)
except ValueError:
logger.error("Payload processing failed", exc_info=True)
raise
The shorthand logger.exception("message") is equivalent to logger.error("message", exc_info=True) and is the idiomatic choice inside except blocks.
try:
result = process(payload)
except ValueError:
logger.exception("Payload processing failed")
raise
Never swallow exceptions silently. If you catch and do not re-raise, at minimum log with exc_info=True so the traceback is preserved.
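For example, a failure you deliberately tolerate should still leave a traceback behind; a minimal sketch:
import logging
import shutil

logger = logging.getLogger(__name__)

def remove_tmp_dir(path: str) -> None:
    try:
        shutil.rmtree(path)
    except OSError:
        # Tolerated and not re-raised, but the traceback stays in the logs
        logger.warning("Temp cleanup failed for %s", path, exc_info=True)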
dictConfig Production Template
logging.basicConfig() is fine for scripts and quick experiments. Production applications need logging.config.dictConfig(), which gives you complete, declarative control over the logging pipeline.
The following template is copy-pasteable. It configures:
- A rotating file handler that writes JSON (via python-json-logger)
- A console handler with human-readable output for local development
- Silenced noisy third-party loggers
- A root logger and an app-specific logger
# logging_config.py
import logging.config
import os
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO").upper()
LOGGING_CONFIG = {
"version": 1,
"disable_existing_loggers": False, # preserve third-party loggers
"formatters": {
"json": {
"()": "pythonjsonlogger.jsonlogger.JsonFormatter",
"format": "%(asctime)s %(name)s %(levelname)s %(message)s",
"datefmt": "%Y-%m-%dT%H:%M:%S",
},
"console": {
"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
"datefmt": "%Y-%m-%dT%H:%M:%S",
},
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "console",
"level": LOG_LEVEL,
},
"json_file": {
"class": "logging.handlers.RotatingFileHandler",
"filename": "logs/app.log",
"maxBytes": 10 * 1024 * 1024, # 10 MB per file
"backupCount": 5,
"formatter": "json",
"level": LOG_LEVEL,
"encoding": "utf-8",
},
},
"loggers": {
# Application root — adjust "myapp" to your package name
"myapp": {
"handlers": ["console", "json_file"],
"level": LOG_LEVEL,
"propagate": False,
},
# Silence noisy third-party libraries
"uvicorn.access": {
"handlers": ["console"],
"level": "WARNING",
"propagate": False,
},
"httpx": {
"level": "WARNING",
"propagate": True,
},
"boto3": {
"level": "WARNING",
"propagate": True,
},
"botocore": {
"level": "WARNING",
"propagate": True,
},
},
"root": {
"handlers": ["console"],
"level": "WARNING",
},
}
def configure_logging() -> None:
    os.makedirs("logs", exist_ok=True)  # os is already imported at module level
    logging.config.dictConfig(LOGGING_CONFIG)
Call configure_logging() once at application startup, before any other imports that might trigger logging:
# main.py or app/__init__.py
from logging_config import configure_logging
configure_logging()
Install the JSON formatter with:
pip install python-json-logger
Silencing Noisy Library Loggers
The dictConfig template above already handles uvicorn, httpx, boto3, and botocore. The key insight is "propagate": False on your application logger combined with a WARNING-only level on the noisy ones. You can add any library:
"loggers": {
"sqlalchemy.engine": {"level": "WARNING", "propagate": True},
"aiohttp.access": {"level": "WARNING", "propagate": True},
"paramiko": {"level": "WARNING", "propagate": True},
}
Setting "propagate": True without a handler means the message travels up to the root logger, where your root handler applies. Setting "level": "WARNING" on the library logger gates what reaches the root in the first place.
structlog: Structured Logging That Travels With Context
structlog is the go-to library for structured logging in Python. Its central idea is the bound logger: a logger that carries a dictionary of context fields that are automatically included in every subsequent log call.
Install:
pip install structlog
Basic Configuration
# structlog_config.py
import logging
import structlog
def configure_structlog(json_output: bool = False) -> None:
shared_processors = [
structlog.contextvars.merge_contextvars,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
]
if json_output:
renderer = structlog.processors.JSONRenderer()
else:
renderer = structlog.dev.ConsoleRenderer()
structlog.configure(
processors=shared_processors + [renderer],
wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
context_class=dict,
logger_factory=structlog.PrintLoggerFactory(),
cache_logger_on_first_use=True,
)
Call this at startup. In development set json_output=False for readable console output. In production set json_output=True for machine-parseable JSON.
bind() — Context That Travels With the Logger
The killer feature of structlog is bind(). It returns a new logger with additional fields baked in:
import structlog
logger = structlog.get_logger()
# Bind fields once, they appear in every subsequent call on this logger
order_logger = logger.bind(order_id="ORD-9921", user_id=42)
order_logger.info("order_received")
order_logger.info("payment_processing", provider="stripe", amount=99.99)
order_logger.error("payment_failed", error_code="card_declined")
Output (JSON mode):
{"order_id": "ORD-9921", "user_id": 42, "event": "order_received", "level": "info", "timestamp": "2026-05-12T10:00:00Z"}
{"order_id": "ORD-9921", "user_id": 42, "event": "payment_processing", "provider": "stripe", "amount": 99.99, "level": "info", "timestamp": "2026-05-12T10:00:01Z"}
{"order_id": "ORD-9921", "user_id": 42, "event": "payment_failed", "error_code": "card_declined", "level": "error", "timestamp": "2026-05-12T10:00:02Z"}
Every log line carries the full context without any repetition in the call sites. This makes log aggregator queries trivial: order_id="ORD-9921" returns the complete timeline for that order.
ConsoleRenderer vs JSONRenderer
| Renderer | Best for | Output |
|---|---|---|
| ConsoleRenderer | Local development | Colorized, aligned human-readable text |
| JSONRenderer | Production / log aggregators | One JSON object per line (NDJSON) |
Switch between them via an environment variable:
import os
configure_structlog(json_output=os.environ.get("ENV") == "production")
Context Variables for Request-Scoped Data
structlog integrates with Python's contextvars module. Fields bound via structlog.contextvars.bind_contextvars() are automatically available in all loggers within the same async task or thread — without passing the logger object around.
import structlog
structlog.contextvars.bind_contextvars(request_id="req-abc123")
# All subsequent log calls in this coroutine/thread see request_id
logger = structlog.get_logger()
logger.info("handling_request") # includes request_id automatically
This is the foundation for the FastAPI middleware shown later.
loguru: Logging Without the Boilerplate
loguru takes a different philosophy: eliminate configuration complexity entirely. There is one pre-configured logger object ready to use after import, and customization is done by removing the default sink and adding new ones.
Install:
pip install loguru
Basic Setup
from loguru import logger
# Remove the default stderr sink
logger.remove()
# Add a stdout sink with custom format and level from environment
import sys, os
logger.add(
sys.stdout,
level=os.environ.get("LOG_LEVEL", "INFO"),
format="{time:YYYY-MM-DDTHH:mm:ss} | {level:<8} | {name}:{line} - {message}",
colorize=True,
)
File Rotation and Retention
loguru handles file rotation and retention in a single add() call:
logger.add(
"logs/app.log",
level="INFO",
rotation="10 MB", # rotate when file reaches 10 MB
retention="30 days", # delete rotated files older than 30 days
compression="gz", # compress rotated files
serialize=True, # emit JSON instead of formatted text
enqueue=True, # thread/process safe async writing
)
serialize=True produces one JSON object per line, suitable for log aggregators.
@logger.catch — Exception Decorator
@logger.catch is one of loguru's most convenient features. It wraps a function and automatically logs any unhandled exception with a full traceback. By default the exception is suppressed after logging; pass @logger.catch(reraise=True) if it should propagate:
from loguru import logger
@logger.catch
def process_batch(records: list) -> None:
for record in records:
transform(record)
If transform() raises, loguru logs the full traceback including local variable values (when diagnose=True, which is the default; disable it in production so sensitive values do not leak into logs). logger.catch() can also be used as a context manager:
with logger.catch():
risky_operation()
JSON Output with serialize=True
When serialize=True is passed to logger.add(), every log record is emitted as a structured JSON object:
{
"text": "2026-05-12T10:00:00 | INFO | myapp.orders:42 - order received\n",
"record": {
"elapsed": {"repr": "0:00:01.234", "seconds": 1.234},
"exception": null,
"extra": {"order_id": "ORD-9921"},
"file": {"name": "orders.py", "path": "/app/myapp/orders.py"},
"function": "handle_order",
"level": {"icon": "ℹ", "name": "INFO", "no": 20},
"line": 42,
"message": "order received",
"module": "orders",
"name": "myapp.orders",
"process": {"id": 1234, "name": "MainProcess"},
"thread": {"id": 140234567890, "name": "MainThread"},
"time": {"repr": "2026-05-12T10:00:00+00:00", "timestamp": 1747044000.0}
}
}
Binding Context with logger.bind()
loguru also supports context binding:
request_logger = logger.bind(request_id="req-abc123", user_id=42)
request_logger.info("Request received")
request_logger.warning("Rate limit approaching", remaining=5)
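For request-scoped fields without passing a bound logger around, loguru also provides logger.contextualize(), a context manager backed by contextvars; a sketch:
from loguru import logger

def handle_request(request_id: str) -> None:
    with logger.contextualize(request_id=request_id):
        logger.info("request received")   # request_id travels in record["extra"]
        logger.info("request finished")
    logger.info("outside the block")      # no request_id here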
Comparison Table
| Feature | stdlib | structlog | loguru |
|---|---|---|---|
| Ships with Python | Yes | No | No |
| JSON output built-in | No (needs python-json-logger) | Yes (JSONRenderer) | Yes (serialize=True) |
| Structured key=value fields | No | Yes (core feature) | Yes (bind) |
| File rotation built-in | Yes (RotatingFileHandler) | Delegates to stdlib | Yes (rotation=) |
| Retention / compression | No | No | Yes |
| Exception decorator | No | No | Yes (@logger.catch) |
| Async / thread safe | Yes | Yes | Yes (enqueue=True) |
| Context propagation | Manual (LoggerAdapter) | contextvars integration | bind() / contextvars |
| Configuration complexity | High (dictConfig) | Medium | Low |
| Learning curve | High | Medium | Low |
| stdlib compatibility | Native | Full integration | Partial (intercept) |
| Best for | Libraries | Large microservices | New applications |
Opinionated Recommendation
Writing a library or reusable package? Use stdlib only. Add a NullHandler to your package logger. Do not pull in third-party logging dependencies. Let the application author choose their logging stack.
Building a new application (API, CLI, data pipeline)? Use loguru. The setup is three lines, rotation and retention are built in, @logger.catch saves you from boilerplate try/except, and serialize=True gives you production-ready JSON output immediately.
Building large microservices or a platform with many teams? Use structlog. Its processor pipeline is composable and testable. The contextvars integration makes request-scoped logging seamless in async frameworks. The bind() API enforces structured logging discipline across a large codebase. It also integrates cleanly with stdlib so existing code migrates incrementally.
FastAPI Middleware: Add request_id to All Logs with structlog
The following middleware adds a unique request_id to the structlog context at the start of each request. Every log call within that request — regardless of which module emits it — will automatically include the request_id.
# middleware/logging.py
import uuid
import time
import structlog
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response
logger = structlog.get_logger(__name__)
class RequestLoggingMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next) -> Response:
request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
start_time = time.perf_counter()
# Bind to contextvars — visible in all loggers for this request
structlog.contextvars.clear_contextvars()
structlog.contextvars.bind_contextvars(
request_id=request_id,
method=request.method,
path=request.url.path,
)
logger.info("request_started")
        try:
            response = await call_next(request)
        except Exception:
            duration_ms = (time.perf_counter() - start_time) * 1000
            logger.exception("request_failed", duration_ms=round(duration_ms, 2))
            raise

        # Only reached on success; the failure path above has already logged and re-raised
        duration_ms = (time.perf_counter() - start_time) * 1000
        logger.info(
            "request_completed",
            status_code=response.status_code,
            duration_ms=round(duration_ms, 2),
        )
response.headers["X-Request-ID"] = request_id
return response
Register the middleware in your FastAPI application:
# main.py
from fastapi import FastAPI
from middleware.logging import RequestLoggingMiddleware
from structlog_config import configure_structlog
import os
configure_structlog(json_output=os.environ.get("ENV") == "production")
app = FastAPI()
app.add_middleware(RequestLoggingMiddleware)
Now every log line emitted during a request automatically carries request_id, method, and path. In a log aggregator, querying request_id="req-abc123" gives you the complete trace of that request across every layer of your application.
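Inside any route handler you simply grab a logger and log; the request-scoped fields are merged in by merge_contextvars. A sketch (the module path and field names are illustrative):
# routes/orders.py (hypothetical module)
import structlog
from fastapi import APIRouter

router = APIRouter()
logger = structlog.get_logger(__name__)

@router.post("/orders")
async def create_order(payload: dict):
    # request_id, method, and path are added automatically from the contextvars bound in the middleware
    logger.info("order_received", item_count=len(payload.get("items", [])))
    return {"status": "accepted"}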
Testing Log Output with pytest caplog
pytest's built-in caplog fixture captures log records emitted during a test. It works with stdlib logging and, with a small bridge, with structlog.
Testing stdlib logging
# test_orders.py
import logging
from myapp.orders import process_order
def test_process_order_logs_warning_on_empty_payload(caplog):
with caplog.at_level(logging.WARNING, logger="myapp.orders"):
process_order(payload={})
assert len(caplog.records) == 1
record = caplog.records[0]
assert record.levelname == "WARNING"
assert "empty payload" in record.message
Testing structlog
structlog provides a testing module with capture_logs(), a context manager that returns a list of captured log entries as plain dicts:
# test_orders_structlog.py
import structlog.testing
from myapp.orders import process_order
def test_process_order_emits_structured_log():
with structlog.testing.capture_logs() as captured:
process_order(payload={"item_id": 1, "quantity": 2})
assert len(captured) == 1
assert captured[0]["event"] == "order_processed"
assert captured[0]["item_id"] == 1
assert captured[0]["log_level"] == "info"
Testing loguru
Loguru requires a small fixture to capture output. The simplest approach is to add a sink that writes to a StringIO buffer:
# conftest.py
import pytest
from loguru import logger
import sys
from io import StringIO
@pytest.fixture
def log_output():
buffer = StringIO()
handler_id = logger.add(buffer, format="{level}:{message}", level="DEBUG")
yield buffer
logger.remove(handler_id)
# test_orders_loguru.py
def test_process_order_logs_info(log_output):
from myapp.orders import process_order
process_order(payload={"item_id": 1})
output = log_output.getvalue()
assert "INFO" in output
assert "order_processed" in output
Integration with Sentry
Sentry is the de facto standard for exception tracking in Python applications. The Sentry SDK integrates with the logging module automatically: any ERROR or above log record is captured as a Sentry event.
Install:
pip install sentry-sdk
Basic Sentry Setup
import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration
import logging
sentry_sdk.init(
dsn="https://[email protected]/your-project-id",
integrations=[
LoggingIntegration(
level=logging.INFO, # capture INFO and above as breadcrumbs
event_level=logging.ERROR, # send ERROR and above as Sentry events
),
],
traces_sample_rate=0.1, # 10% of transactions for performance monitoring
environment="production",
release="[email protected]", # tie errors to a specific release
)
After this, every logger.error() or logger.exception() call automatically sends an event to Sentry with the full traceback, local variables (in debug mode), and any extra fields you attach.
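Fields attached through the standard extra dict travel along with the event; a sketch (charge_card() and order are hypothetical):
import logging

logger = logging.getLogger(__name__)

try:
    charge_card(order)  # charge_card and order are hypothetical
except Exception:
    # Captured by Sentry as an event; the extra fields show up on the issue
    logger.exception("Charge failed", extra={"order_id": "ORD-9921", "provider": "stripe"})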
structlog + Sentry
structlog does not emit through stdlib by default in the configuration shown above (it uses PrintLoggerFactory). To route structlog events through stdlib — and thus into Sentry — switch to stdlib integration:
import structlog
import logging
structlog.configure(
processors=[
structlog.contextvars.merge_contextvars,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
cache_logger_on_first_use=True,
)
With this configuration, structlog routes through stdlib, and Sentry's LoggingIntegration picks up error-level events automatically.
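The snippet above still needs a stdlib handler that knows how to render those wrapped records; a minimal sketch using ProcessorFormatter, run once at startup after structlog.configure():
import logging
import structlog

handler = logging.StreamHandler()
handler.setFormatter(
    structlog.stdlib.ProcessorFormatter(
        # Final rendering step for records coming from structlog and from plain stdlib loggers
        processor=structlog.processors.JSONRenderer(),
    )
)
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.INFO)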
loguru + Sentry
loguru requires an explicit sink to forward to Sentry:
from loguru import logger
import sentry_sdk
def sentry_sink(message):
    record = message.record
    if record["level"].no >= 40:  # ERROR and above
        if record["exception"]:
            # Prefer capturing the real exception so Sentry groups issues by stack trace
            sentry_sdk.capture_exception(record["exception"].value)
        else:
            sentry_sdk.capture_message(
                record["message"],
                level=record["level"].name.lower(),
            )

logger.add(sentry_sink)
Summary
Effective Python logging in 2026 is not about picking the most feature-rich library — it is about matching the tool to the context and using it correctly:
- Never use print() in production code. Use a logger with levels, context, and structured output.
- Never call logging.basicConfig() in libraries. Add only a NullHandler and let the application configure logging.
- Always use logging.getLogger(__name__) in modules, and pass format arguments separately for lazy interpolation.
- Use dictConfig for production stdlib configuration. The template in this article is a working starting point.
- Silence noisy third-party loggers by name — uvicorn, httpx, boto3, sqlalchemy — in your dictConfig or loguru filter.
- Use structlog when you need composable processors, first-class structured fields, and request-scoped context in async microservices.
- Use loguru when you want minimal setup, built-in rotation and retention, and the @logger.catch decorator.
- Test your logs. caplog, structlog.testing.capture_logs(), and loguru's custom sinks make logging testable with zero ceremony.
- Integrate Sentry for automatic exception tracking — one sentry_sdk.init() call and your error logs become actionable alerts.
Consistent, structured, and testable logging is what separates applications that are debuggable from applications that are guesswork. The tools are mature, the setup is not complex, and the payoff at 2 AM is everything.