Python Decorators Tutorial 2026: 8 Production Patterns from Retry to Auth
Decorators are one of Python's most expressive features. Used well, they eliminate boilerplate, enforce cross-cutting concerns uniformly, and keep business logic clean. Used carelessly, they turn a codebase into an inscrutable pile of magic. This tutorial starts from first principles and builds up to eight production-ready decorators you can copy, paste, and adapt today.
Covered in this guide:
- What a decorator is and how @ syntax maps to a function call
- The *args, **kwargs passthrough rule — why you must always use it
- functools.wraps — what breaks without it and how to fix it
- Decorators that accept arguments: the three-layer pattern
- Class-based decorators: __init__, __call__, and stateful use cases
- Eight complete, production-quality decorators with real-world motivation
Prerequisites: you should be comfortable with Python functions as first-class objects and closures. Python 3.10+ is assumed throughout.
1. What Is a Decorator?
A decorator is a callable that takes a function (or class) as input and returns a replacement callable. The @decorator syntax is pure syntactic sugar introduced in PEP 318:
@decorator
def my_function():
    ...
is exactly equivalent to:
def my_function():
    ...
my_function = decorator(my_function)
That is the entire mechanical definition. Everything else — retry logic, caching, authentication — is just a consequence of what the decorator returns.
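To see the equivalence concretely, here is a small sketch (shout and greet are illustrative names) that applies the same decorator both ways and gets identical behavior:

```python
def shout(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

# Manual application -- exactly what the @ syntax does for us
def greet2(name):
    return f"hello, {name}"
greet2 = shout(greet2)

print(greet("alice"))   # HELLO, ALICE
print(greet2("alice"))  # HELLO, ALICE
```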
The simplest possible decorator does nothing except return the original function:
def noop(func):
    return func

@noop
def greet(name):
    return f"Hello, {name}"
Slightly more useful: a decorator that prints a message before calling the function.
def announce(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name):
    return f"Hello, {name}"

greet("Alice")
# Calling greet
# 'Hello, Alice'
2. The *args, **kwargs Passthrough Rule
A wrapper function must forward all positional and keyword arguments to the wrapped function unchanged — otherwise you break callers silently. This is non-negotiable:
def announce(func):
    def wrapper(*args, **kwargs):      # accept anything
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)   # forward everything
    return wrapper
If you write def wrapper(name): instead, the decorator breaks the moment someone adds a second parameter or uses a keyword argument. Always use *args, **kwargs unless you have a compelling specific reason not to.
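Here is the failure mode in miniature (bad_announce and add are illustrative names): the hard-coded wrapper signature works for one-argument functions and breaks for everything else.

```python
def bad_announce(func):
    def wrapper(name):              # hard-coded signature -- fragile
        print(f"Calling {func.__name__}")
        return func(name)
    return wrapper

@bad_announce
def add(a, b):
    return a + b

try:
    add(1, 2)
except TypeError as exc:
    # The call never reaches add(); it fails while binding to wrapper(name)
    print(exc)
```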
3. functools.wraps — Why It Is Mandatory
When you replace a function with a wrapper, Python sees the wrapper, not the original. This breaks introspection in ways that cause real bugs in production:
def announce(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name: str) -> str:
    """Return a greeting for name."""
    return f"Hello, {name}"

print(greet.__name__)    # wrapper — WRONG
print(greet.__doc__)     # None — WRONG
print(greet.__module__)  # __main__ — but from wrapper, not greet
Tools that depend on __name__ (logging, pytest, Flask route registration, Sphinx autodoc) will silently report the wrong function. Pydantic and FastAPI inspect __annotations__ on the wrapped callable; losing them breaks request parsing.
The fix is functools.wraps, which copies __name__, __qualname__, __doc__, __dict__, __module__, __annotations__, and __wrapped__ from the original function to the wrapper:
import functools

def announce(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name: str) -> str:
    """Return a greeting for name."""
    return f"Hello, {name}"

print(greet.__name__)     # greet — correct
print(greet.__doc__)      # Return a greeting for name.
print(greet.__wrapped__)  # <function greet at 0x...> — unwrap chain
__wrapped__ is also used by inspect.unwrap(), which lets debuggers and profilers peel back decorator layers to find the real function. Always use @functools.wraps(func).
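A quick demonstration of the unwrap chain, reusing the announce decorator from above:

```python
import functools
import inspect

def announce(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name: str) -> str:
    return f"Hello, {name}"

# inspect.unwrap follows __wrapped__ links until it reaches the real function
original = inspect.unwrap(greet)
assert original is greet.__wrapped__
assert original.__name__ == "greet"
```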
Reference: docs.python.org/3/library/functools.html#functools.wraps
4. Decorators with Arguments: The Three-Layer Pattern
Plain decorators take only a function. When you want configurable decorators — @retry(max_attempts=5) — you need one more layer of nesting. The pattern has three levels:
- Outermost function — receives the decorator's arguments, returns the decorator.
- Decorator function — receives the function, returns the wrapper.
- Wrapper function — does the actual work, calls the original function.
import functools

def repeat(times: int):                    # layer 1: argument receiver
    def decorator(func):                   # layer 2: the decorator
        @functools.wraps(func)
        def wrapper(*args, **kwargs):      # layer 3: the wrapper
            result = None                  # defined even if times <= 0
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def say(message: str) -> None:
    print(message)

say("hello")
# hello
# hello
# hello
When Python processes @repeat(times=3), it first calls repeat(times=3), which returns decorator. Then it applies decorator to say, which returns wrapper. The name say now points to wrapper.
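Those two steps can be written out by hand; this is precisely the desugaring described above (repeat is reproduced here so the snippet stands alone):

```python
import functools

def repeat(times: int):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

def say(message: str) -> None:
    print(message)

decorator = repeat(times=3)  # step 1: call the factory with the arguments
say = decorator(say)         # step 2: apply the returned decorator to say
```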
5. Class-Based Decorators
A class can act as a decorator if it implements __call__. Class-based decorators are the natural choice when a decorator needs to maintain state between calls — a counter, a cache, a lock — because instance variables are the right place for that state.
import functools

class CountCalls:
    def __init__(self, func):
        functools.update_wrapper(self, func)  # equivalent of @functools.wraps for classes
        self.func = func
        self.call_count = 0

    def __call__(self, *args, **kwargs):
        self.call_count += 1
        print(f"{self.func.__name__} has been called {self.call_count} time(s)")
        return self.func(*args, **kwargs)

@CountCalls
def compute(x, y):
    return x + y

compute(1, 2)  # compute has been called 1 time(s)
compute(3, 4)  # compute has been called 2 time(s)
print(compute.call_count)  # 2
functools.update_wrapper(self, func) does the same job as @functools.wraps(func) but works on an object rather than a nested function.
For class-based decorators that also accept arguments, __init__ receives the arguments and __call__ receives the function, then returns a wrapper — mirroring the three-layer pattern from the previous section.
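A minimal sketch of that mirrored pattern (Repeat is an illustrative name): the constructor stores the configuration, and __call__ plays the role of the decorator layer.

```python
import functools

class Repeat:
    """Class-based decorator with arguments: __init__ receives the
    arguments, __call__ receives the function and returns the wrapper."""

    def __init__(self, times: int):
        self.times = times

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(self.times):
                result = func(*args, **kwargs)
            return result
        return wrapper

@Repeat(times=3)
def say(message: str) -> None:
    print(message)
```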
6. Production Decorator 1: Retry with Exponential Backoff
Network calls, database queries, and external API requests fail transiently. A retry decorator with exponential backoff is one of the most broadly useful tools in a backend developer's toolkit.
import functools
import logging
import time

logger = logging.getLogger(__name__)

def retry(
    max_attempts: int = 3,
    backoff_factor: float = 2.0,
    initial_delay: float = 0.5,
    exceptions: tuple[type[Exception], ...] = (Exception,),
):
    """Retry a function up to *max_attempts* times with exponential backoff.

    Args:
        max_attempts: Maximum number of total attempts (including the first).
        backoff_factor: Multiplier applied to the delay after each failure.
        initial_delay: Seconds to wait before the second attempt.
        exceptions: Tuple of exception types that trigger a retry.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = initial_delay
            last_exc: Exception | None = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt == max_attempts:
                        logger.error(
                            "%s failed after %d attempts: %s",
                            func.__qualname__,
                            max_attempts,
                            exc,
                        )
                        raise
                    logger.warning(
                        "%s attempt %d/%d failed: %s. Retrying in %.2fs.",
                        func.__qualname__,
                        attempt,
                        max_attempts,
                        exc,
                        delay,
                    )
                    time.sleep(delay)
                    delay *= backoff_factor
            raise last_exc  # unreachable, but satisfies type checkers
        return wrapper
    return decorator

# Usage
import requests

@retry(max_attempts=5, backoff_factor=2.0, initial_delay=0.5, exceptions=(requests.RequestException,))
def fetch_user(user_id: int) -> dict:
    response = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
    response.raise_for_status()
    return response.json()
Key design decisions: the exceptions parameter restricts which errors are retried — retrying a ValueError is almost never correct. The logger uses __qualname__, which includes the class name for methods (e.g. UserService.fetch_user).
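One common refinement, not part of the decorator above, is adding random jitter so that many clients retrying in lockstep don't all hit the server at the same instant. A sketch of just the delay calculation (next_delay is an illustrative helper name):

```python
import random

def next_delay(delay: float, backoff_factor: float, max_delay: float = 30.0) -> float:
    """Full-jitter backoff: sleep a uniform random amount between zero
    and the capped exponential delay, instead of the exact delay."""
    capped = min(delay * backoff_factor, max_delay)
    return random.uniform(0.0, capped)
```

Inside the wrapper you would replace `time.sleep(delay); delay *= backoff_factor` with `delay = next_delay(delay, backoff_factor); time.sleep(delay)`.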
7. Production Decorator 2: TTL Cache / Memoization
For pure functions or functions whose results rarely change, caching is a trivial performance win. Python ships functools.lru_cache for the simplest case:
from functools import lru_cache

@lru_cache(maxsize=256)
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
lru_cache has no expiry. For real services you need a TTL. Here is a production TTL cache decorator using a thread-safe dictionary:
import functools
import threading
import time
from typing import Any

def ttl_cache(maxsize: int = 128, ttl: float = 300.0):
    """Memoize a function with a time-to-live expiry per cache entry.

    Args:
        maxsize: Maximum number of entries to store.
        ttl: Seconds before a cached result expires.
    """
    def decorator(func):
        cache: dict[tuple, tuple[Any, float]] = {}
        lock = threading.Lock()

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = args + tuple(sorted(kwargs.items()))
            now = time.monotonic()
            with lock:
                if key in cache:
                    result, ts = cache[key]
                    if now - ts < ttl:
                        return result
                    del cache[key]
            result = func(*args, **kwargs)
            with lock:
                if len(cache) >= maxsize:
                    # Evict the oldest entry
                    oldest_key = min(cache, key=lambda k: cache[k][1])
                    del cache[oldest_key]
                cache[key] = (result, time.monotonic())
            return result

        def cache_clear():
            with lock:
                cache.clear()

        wrapper.cache_clear = cache_clear
        wrapper.cache_info = lambda: {"size": len(cache), "maxsize": maxsize, "ttl": ttl}
        return wrapper
    return decorator

# Usage
@ttl_cache(maxsize=512, ttl=60.0)
def get_config(env: str) -> dict:
    """Fetch configuration — expensive DB read, cached for 60 seconds."""
    return fetch_config_from_db(env)
The time.monotonic() clock is immune to system clock adjustments, making it safe for TTL calculations. The cache_clear and cache_info methods mimic the lru_cache API so the decorator is a drop-in upgrade.
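One caveat worth knowing: the cache key is built from the arguments themselves, so unhashable arguments (lists, dicts) raise TypeError when the key is stored. A quick look at the key construction in isolation (make_key mirrors the wrapper's key line and is an illustrative name):

```python
def make_key(args: tuple, kwargs: dict) -> tuple:
    """Same key construction as the wrapper above: positional args plus
    sorted keyword items, so f(a=1, b=2) and f(b=2, a=1) share one entry."""
    return args + tuple(sorted(kwargs.items()))
```

If you need to cache calls that take lists or dicts, convert them to tuples or frozensets at the call site, or normalize them inside the key function.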
8. Production Decorator 3: Execution Timer
Long-running functions are a leading cause of latency spikes. An execution timer decorator logs slow calls automatically, so you get observability without polluting every function body with timing code.
import functools
import logging
import time

logger = logging.getLogger(__name__)

def timed(threshold_ms: float = 500.0):
    """Log a warning when a function exceeds *threshold_ms* milliseconds.

    Always logs at DEBUG level. Warns when the threshold is exceeded.

    Args:
        threshold_ms: Millisecond threshold above which a warning is emitted.
    """
    def decorator(func):
        module = func.__module__
        qualname = func.__qualname__

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                log = logger.warning if elapsed_ms > threshold_ms else logger.debug
                log(
                    "[%s] %s completed in %.2f ms",
                    module,
                    qualname,
                    elapsed_ms,
                )
        return wrapper
    return decorator

# Usage
@timed(threshold_ms=200.0)
def query_orders(customer_id: int) -> list[dict]:
    return db.execute("SELECT * FROM orders WHERE customer_id = %s", customer_id)
Using time.perf_counter() gives the highest-resolution timer available on the platform. The finally block ensures the elapsed time is always logged, even when the function raises an exception.
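One limitation: applied to an async def function, the synchronous wrapper above would time only the creation of the coroutine object, not its execution. A sketch of a variant (timed_any is an illustrative name, not part of the decorator above) that dispatches on asyncio.iscoroutinefunction:

```python
import asyncio
import functools
import logging
import time

logger = logging.getLogger(__name__)

def timed_any(threshold_ms: float = 500.0):
    """Like @timed, but awaits coroutine functions so the measured time
    covers their actual execution."""
    def decorator(func):
        def _log(start: float) -> None:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log = logger.warning if elapsed_ms > threshold_ms else logger.debug
            log("%s completed in %.2f ms", func.__qualname__, elapsed_ms)

        if asyncio.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return await func(*args, **kwargs)  # time the await itself
                finally:
                    _log(start)
            return async_wrapper

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                _log(start)
        return wrapper
    return decorator
```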
9. Production Decorator 4: In-Process Rate Limiter
Rate limiting at the function level is useful for wrapping third-party API clients that impose per-second or per-minute call limits. This implementation uses a sliding window with a threading.Lock for thread safety.
import functools
import threading
import time
from collections import deque

def rate_limit(calls: int, period: float):
    """Allow at most *calls* invocations per *period* seconds.

    Blocks the calling thread until a slot is available rather than raising.

    Args:
        calls: Maximum number of calls permitted within *period*.
        period: Time window in seconds.
    """
    def decorator(func):
        timestamps: deque[float] = deque()
        lock = threading.Lock()

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                now = time.monotonic()
                # Remove timestamps outside the current window
                while timestamps and now - timestamps[0] >= period:
                    timestamps.popleft()
                if len(timestamps) >= calls:
                    sleep_for = period - (now - timestamps[0])
                    if sleep_for > 0:
                        time.sleep(sleep_for)
                    # After sleeping, prune again
                    now = time.monotonic()
                    while timestamps and now - timestamps[0] >= period:
                        timestamps.popleft()
                timestamps.append(time.monotonic())
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Usage: at most 10 calls per second to an external payments API
@rate_limit(calls=10, period=1.0)
def charge_card(amount_cents: int, token: str) -> dict:
    return payments_client.charge(amount_cents, token)
For distributed rate limiting across multiple processes or hosts you would use a Redis-backed approach (e.g. via redis-py with INCR + EXPIRE). The in-process version here is appropriate for single-process services or wrapping clients in worker threads.
10. Production Decorator 5: Runtime Type Validation
Python's type annotations are not enforced at runtime. During development and testing, having a decorator that validates argument types against annotations catches mistakes that would otherwise surface as confusing errors deep in a call stack.
import functools
import inspect
import os
from typing import get_type_hints

def validate_types(func):
    """Validate argument types against annotations at call time.

    Enabled when the environment variable VALIDATE_TYPES is set to '1'.
    Has zero overhead in production when the variable is unset.
    """
    if os.getenv("VALIDATE_TYPES") != "1":
        return func  # no-op in production

    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        for param_name, value in bound.arguments.items():
            if param_name in hints:
                expected = hints[param_name]
                # Skip None for Optional types — a full check would use typing.get_args
                if value is None:
                    continue
                # Only check concrete (non-generic) types to keep it simple
                if isinstance(expected, type) and not isinstance(value, expected):
                    raise TypeError(
                        f"{func.__qualname__}: argument '{param_name}' expected "
                        f"{expected.__name__}, got {type(value).__name__} ({value!r})"
                    )
        return func(*args, **kwargs)
    return wrapper

# Usage
@validate_types
def create_user(username: str, age: int, active: bool = True) -> dict:
    return {"username": username, "age": age, "active": active}

# VALIDATE_TYPES=1 python app.py
# create_user("alice", "25") ->
# TypeError: create_user: argument 'age' expected int, got str ('25')
The os.getenv guard means the decorator adds zero overhead when VALIDATE_TYPES is not set — the decorator returns the original function untouched, so there is not even a wrapper call in the hot path.
11. Production Decorator 6: Deprecation Warning
When you rename or replace a function in a library or internal SDK, you want callers to receive a clear deprecation warning pointing them to the replacement — not a silent breakage six months later.
import functools
import warnings

def deprecated(replacement: str | None = None, since: str | None = None):
    """Mark a function as deprecated.

    Args:
        replacement: The name of the function or method that should be used instead.
        since: Version string when the function was deprecated.
    """
    def decorator(func):
        parts = [f"{func.__qualname__} is deprecated"]
        if since:
            parts.append(f"since version {since}")
        parts.append("and will be removed in a future release.")
        if replacement:
            parts.append(f"Use {replacement} instead.")
        message = " ".join(parts)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)

        # Update the docstring so readers see the deprecation notice
        wrapper.__doc__ = (
            f".. deprecated:: {since or 'unknown'}\n"
            f"   {message}\n\n"
        ) + (func.__doc__ or "")
        return wrapper
    return decorator

# Usage
@deprecated(replacement="send_email_v2", since="3.4.0")
def send_email(to: str, subject: str, body: str) -> bool:
    ...

# Calling send_email(...) produces:
# DeprecationWarning: send_email is deprecated since version 3.4.0 and will be
# removed in a future release. Use send_email_v2 instead.
stacklevel=2 is critical: it makes the warning point to the caller's line of code, not to the wrapper function inside the decorator. Without it, every warning would report the same internal line, making it impossible to find and fix the call site.
12. Production Decorator 7: Singleton
The singleton pattern ensures only one instance of a class is ever created. A class decorator is a clean way to enforce this without requiring the class itself to implement any special metaclass or __new__ logic.
import functools
import threading

def singleton(cls):
    """Ensure only one instance of *cls* is ever created.

    Thread-safe: double-checked locking prevents redundant instantiation
    under concurrent access.
    """
    instances: dict[type, object] = {}
    lock = threading.Lock()

    @functools.wraps(cls, updated=[])  # updated=[] prevents __dict__ copy
    def get_instance(*args, **kwargs):
        if cls not in instances:
            with lock:
                if cls not in instances:  # double-checked locking
                    instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

# Usage
@singleton
class DatabasePool:
    def __init__(self, dsn: str = "postgresql://localhost/app"):
        self.dsn = dsn
        self._pool = create_pool(dsn)  # expensive — only runs once

    def acquire(self):
        return self._pool.acquire()

pool_a = DatabasePool()
pool_b = DatabasePool()
assert pool_a is pool_b  # True — same object
The updated=[] argument to functools.wraps prevents it from copying __dict__ from the class to the wrapper function, which would be incorrect here. The double-checked locking pattern avoids acquiring the lock on every call after the singleton is created. One trade-off to be aware of: the decorated name now refers to a function, not the class, so isinstance(pool_a, DatabasePool) checks and subclassing no longer work.
13. Production Decorator 8: FastAPI Role-Based Auth
FastAPI's dependency injection system works cleanly with decorators. A @require_role decorator that integrates with FastAPI's Depends mechanism lets you protect endpoints with a single line.
import functools
from typing import Callable

from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

# --- Token / user infrastructure ---
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/auth/token")

async def get_current_user(token: str = Depends(oauth2_scheme)) -> dict:
    """Decode the JWT and return the user payload.

    Replace this stub with your actual JWT verification logic.
    """
    payload = decode_jwt(token)  # your JWT library call here
    if not payload:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired token",
            headers={"WWW-Authenticate": "Bearer"},
        )
    return payload

# --- The decorator ---
def require_role(*roles: str) -> Callable:
    """Restrict a FastAPI endpoint to users that hold one of *roles*.

    Usage::

        @router.get("/admin/users")
        @require_role("admin", "superuser")
        async def list_users(current_user: dict = Depends(get_current_user)):
            ...

    Args:
        *roles: One or more role strings. The user must have at least one.
    """
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            # current_user is injected by FastAPI via Depends; find it in kwargs
            current_user: dict = kwargs.get("current_user") or {}
            user_roles: list[str] = current_user.get("roles", [])
            if not any(role in user_roles for role in roles):
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail=(
                        f"Access denied. Required role(s): {', '.join(roles)}. "
                        f"Your role(s): {', '.join(user_roles) or 'none'}."
                    ),
                )
            return await func(*args, **kwargs)
        return wrapper
    return decorator

# --- Wiring it together ---
app = FastAPI()

@app.get("/admin/reports")
@require_role("admin")
async def get_reports(current_user: dict = Depends(get_current_user)):
    return {"reports": fetch_reports()}

@app.delete("/admin/users/{user_id}")
@require_role("admin", "superuser")
async def delete_user(user_id: int, current_user: dict = Depends(get_current_user)):
    remove_user(user_id)
    return {"deleted": user_id}
Decorator stacking order matters in FastAPI: the route decorator (@app.get) must come first (outermost), followed by your custom decorators. This is because FastAPI inspects the function signature at registration time to generate the OpenAPI schema — functools.wraps ensures the signature is preserved correctly.
14. Summary Table
| # | Decorator | Use Case | Key Library / Module |
|---|---|---|---|
| 1 | @retry | Transient network/DB failures | time, logging |
| 2 | @ttl_cache | Memoize with expiry, e.g. config reads | threading, time |
| 3 | @timed | Log slow functions above a threshold | time.perf_counter, logging |
| 4 | @rate_limit | Cap calls to external APIs | threading.Lock, collections.deque |
| 5 | @validate_types | Catch type errors in dev/test | inspect, typing.get_type_hints |
| 6 | @deprecated | Guide callers to replacements | warnings.warn |
| 7 | @singleton | Ensure a single instance (e.g. DB pool) | threading.Lock |
| 8 | @require_role | Role-based access control in FastAPI | fastapi.Depends, functools.wraps |
15. Common Pitfalls and How to Avoid Them
Forgetting functools.wraps. Every wrapper must carry @functools.wraps(func). Omitting it breaks logging, pytest discovery, Sphinx autodoc, FastAPI's OpenAPI generation, and any other tool that reads __name__ or __doc__.
Mutable default arguments in decorator factories. exceptions=(Exception,) is fine because tuples are immutable. If you use a list default, each call shares the same list object — the classic Python mutable default trap.
Thread safety in stateful decorators. The @ttl_cache and @rate_limit decorators above use threading.Lock. If you write a stateful decorator without a lock, you will encounter race conditions under any multi-threaded server (Gunicorn with threads, uvicorn with multiple workers sharing state, etc.).
Decorator order on class methods. When stacking decorators on a method, decorators are applied from bottom to top (inner to outer). @staticmethod or @classmethod should generally be the outermost (top-most) decorator, with your custom decorator below it, so that your wrapper receives a plain function rather than a descriptor object.
Pickling and multiprocessing. A module-level function wrapped by lru_cache pickles by reference, but its cache is per-process: forked or spawned workers each start cold, and entries are never shared between them. Under fork, existing entries are also duplicated into every child, wasting memory; clear the cache before forking, or use @ttl_cache (above) if you need control over the cache's lifecycle.
16. Further Reading
- PEP 318 – Decorators for Functions and Methods
- docs.python.org — functools
- Real Python — Primer on Python Decorators
- Ramalho, L. Fluent Python, 2nd ed. O'Reilly Media, 2022. Chapter 9: Decorators and Closures.
- FastAPI — Dependencies
Conclusion
Decorators are functions that return functions. That single idea, combined with functools.wraps and the three-layer argument pattern, is enough to build every decorator in this guide. The eight production patterns above — retry, TTL cache, execution timer, rate limiter, type validator, deprecation, singleton, and FastAPI role auth — cover the overwhelming majority of cross-cutting concerns you will encounter in a modern Python backend. Copy them, adapt the parameters to your environment, and keep your business logic free of boilerplate.