Python Observability
Understanding Logging in Python: Beyond Print Statements
Logging is a structured, configurable way to understand what your program is doing — in development and in production.
What is Logging?
Logging records events while your code runs. Unlike ad-hoc print() statements, logging gives you levels, timestamps, and consistent formatting. You can route logs to the console, files, or external systems — and change verbosity without editing code.
Why it matters: In background jobs, APIs, and distributed systems, logs are your time-stamped trail of what happened, when, and where.
Logging vs Print
| Print | Logging |
|---|---|
| Hard-coded; remove manually | Configurable levels (DEBUG → CRITICAL) |
| Console only | Console, file, syslog, cloud sinks |
| No structure | Timestamps, module/function, level |
| Doesn’t scale | Designed for production observability |
Core Log Levels
- DEBUG — Deep diagnostics for development.
- INFO — High-level milestones (startup, shutdown, config).
- WARNING — Unexpected but non-fatal conditions.
- ERROR — Operation failed; handled gracefully.
- CRITICAL — Service-threatening failure.
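One detail worth calling out for the ERROR level: inside an `except` block, `logger.exception()` logs at ERROR and attaches the full traceback automatically, so you rarely need to format it yourself. A minimal sketch:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

try:
    1 / 0
except ZeroDivisionError:
    # exception() logs at ERROR and appends the current traceback
    log.exception("Division failed; falling back to default")
```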
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s"
)

log = logging.getLogger("app")
log.debug("Loaded experimental feature flags")   # suppressed at INFO level
log.info("Server started on port 8000")
log.warning("Cache nearing capacity")
log.error("DB connection failed; retry scheduled")
log.critical("Payment pipeline unavailable")
```
Setting Log Level from Environment
Control verbosity without code changes by reading LOG_LEVEL from the environment.
```python
import os, logging

level_name = os.getenv("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)  # fall back to INFO on bad values
logging.basicConfig(
    level=level,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s"
)
logging.getLogger(__name__).info("Log level set to %s", level_name)
```
Tip: Start with INFO in production; switch to DEBUG only when actively investigating.
Handlers: Console and File
Use handlers to send the same log record to multiple destinations (e.g., console + file).
```python
import logging, sys

logger = logging.getLogger("multi")
logger.setLevel(logging.DEBUG)

fmt = logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")

console = logging.StreamHandler(stream=sys.stdout)
console.setLevel(logging.INFO)
console.setFormatter(fmt)

fileh = logging.FileHandler("app.log", encoding="utf-8")
fileh.setLevel(logging.DEBUG)
fileh.setFormatter(fmt)

logger.addHandler(console)
logger.addHandler(fileh)

logger.info("Visible in console and file")
logger.debug("Visible only in file (DEBUG level)")
```
Timing Work with Logging
Measure slow paths by logging elapsed time. This keeps performance insights alongside normal app logs.
```python
import logging, time

logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")

def expensive():
    time.sleep(0.25)  # simulate work

t0 = time.perf_counter()
expensive()
dt_ms = (time.perf_counter() - t0) * 1000
logging.info("expensive() completed in %.1f ms", dt_ms)
```
Minimal “Log to File” Snippet
Drop this into any script to write a rotating file of logs.
```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = RotatingFileHandler(
    filename="app.log",
    maxBytes=1_000_000,  # rotate once the file reaches ~1 MB
    backupCount=5,       # keep the five most recent rotated files
    encoding="utf-8"
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
))
logger.addHandler(handler)

logger.info("File logging is configured!")
```
Best Practices
- Use structured messages — Log key fields (IDs, counts, durations).
- Don’t log secrets — Mask tokens, passwords, PII.
- Be consistent — Standardize formats across services.
- Prefer INFO in prod — Elevate to DEBUG temporarily.
- Add context — Include request IDs or correlation IDs.
- Rotate files — Use RotatingFileHandler to cap size.
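The "add context" point above can be implemented with a `logging.Filter` that stamps every record with a request ID, which the formatter then prints. The `RequestIdFilter` class and the `abc-123` ID below are illustrative, not a standard API:

```python
import logging

class RequestIdFilter(logging.Filter):
    """Attach a request_id attribute to every record passing through."""
    def __init__(self, request_id: str):
        super().__init__()
        self.request_id = request_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = self.request_id  # add context to the record
        return True  # keep the record

logger = logging.getLogger("ctx")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s [%(levelname)s] [req=%(request_id)s] %(message)s"
))
logger.addHandler(handler)
logger.addFilter(RequestIdFilter("abc-123"))
logger.setLevel(logging.INFO)

logger.info("Order created")  # every line now carries [req=abc-123]
```

In a web service you would typically create one filter (or adapter) per request, so every log line emitted while handling that request can be correlated later.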