Python Debugging

Guiding Principles

Logging
```python
import logging

logging.basicConfig(
    filename='application.log',
    level=logging.WARNING,
    format='[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s',
    datefmt='%H:%M:%S'
)

logging.error("Some serious error occurred.")
logging.warning('Function you are using is deprecated.')
The path of the log file can be retrieved with `logging.getLoggerClass().root.handlers[0].baseFilename`.
The output looks like this:

```
[12:52:35] {<stdin>:1} ERROR - Some serious error occurred.
[12:52:35] {<stdin>:1} WARNING - Function you are using is deprecated.
```
Logging can also be configured through an `.ini` or `.yaml` file (see the official docs for details):
```yaml
version: 1
disable_existing_loggers: true

formatters:
  standard:
    format: "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
    datefmt: '%H:%M:%S'

handlers:
  console:  # handler which will log into stdout
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard  # Use formatter defined above
    stream: ext://sys.stdout
  file:  # handler which will log into file
    class: logging.handlers.RotatingFileHandler
    level: WARNING
    formatter: standard  # Use formatter defined above
    filename: /tmp/warnings.log
    maxBytes: 10485760  # 10MB
    backupCount: 10
    encoding: utf8

root:  # Loggers are organized in hierarchy - this is the root logger config
  level: ERROR
  handlers: [console, file]  # Attaches both handlers defined above

loggers:  # Defines descendants of root logger
  mymodule:  # Logger for "mymodule"
    level: INFO
    handlers: [file]  # Will only use "file" handler defined above
    propagate: no  # Will not propagate logs to "root" logger
```
Reading the YAML:

```python
import yaml
from logging import config

with open("config.yaml", 'rt') as f:
    # Python's logging doesn't support YAML directly, so parse it into a dict first
    config_data = yaml.safe_load(f.read())

config.dictConfig(config_data)
```

Python's logging module doesn't support YAML configuration natively, so the file is first parsed into a dictionary and passed to `dictConfig`.
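The same structure can of course be passed to `dictConfig` directly as a plain dictionary - here is a trimmed-down, self-contained sketch of the YAML config above (console handler only):

```python
import logging
import logging.config

# Dict equivalent of a trimmed-down version of the YAML config above
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s",
            "datefmt": "%H:%M:%S",
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "standard",
            "stream": "ext://sys.stdout",
        }
    },
    "root": {"level": "ERROR", "handlers": ["console"]},
    "loggers": {
        "mymodule": {"level": "INFO", "handlers": ["console"], "propagate": False},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger("mymodule").info("Configured via dict.")
```

This is handy for small scripts where a separate config file would be overkill.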
Another option is an INI-style config file, e.g. `logging.conf`:
```ini
[loggers]
keys=root,simpleExample

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
qualname=simpleExample
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
```
```python
import logging
import logging.config

logging.config.fileConfig('logging.conf')

# create logger
logger = logging.getLogger('simpleExample')

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
```
Logging Decorator
When you need logging output from buggy code, instead of adding logging calls directly, you can control log output through a logging decorator:
```python
from functools import wraps, partial
import logging

# Helper function that attaches function as attribute of an object
def attach_wrapper(obj, func=None):
    if func is None:
        return partial(attach_wrapper, obj)
    setattr(obj, func.__name__, func)
    return func

def log(level, message):
    # Actual decorator
    def decorate(func):
        # Setup logger
        logger = logging.getLogger(func.__module__)
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        log_message = f"{func.__name__} - {message}"

        # Logs the message before executing the decorated function
        @wraps(func)
        def wrapper(*args, **kwargs):
            logger.log(level, log_message)
            return func(*args, **kwargs)

        # Attaches "set_level" to "wrapper" as attribute
        # Function that allows us to set log level
        @attach_wrapper(wrapper)
        def set_level(new_level):
            nonlocal level
            level = new_level

        # Attaches "set_message" to "wrapper" as attribute
        # Function that allows us to set message
        @attach_wrapper(wrapper)
        def set_message(new_message):
            nonlocal log_message
            log_message = f"{func.__name__} - {new_message}"

        return wrapper
    return decorate

# Example Usage
@log(logging.WARN, "example-param")
def somefunc(args):
    return args

somefunc("some args")

# Change log level by accessing internal decorator function
somefunc.set_level(logging.CRITICAL)
# Change log message by accessing internal decorator function
somefunc.set_message("new-message")
somefunc("some args")
```
Not gonna lie, this one might take a bit to wrap your head around (you might want to just copy-paste it and use it). The idea here is that the `log` function takes the arguments and makes them available to the inner `wrapper` function. These arguments are then made adjustable by adding the accessor functions, which are attached to the decorator. As for the `functools.wraps` decorator - if we didn't use it here, the name of the function (`func.__name__`) would get overwritten by the name of the wrapper. But that's a problem, because we want to print the name. This gets solved by `functools.wraps`, as it copies the function's name, docstring and argument list onto the decorator function.
__repr__ for Better Readability
```python
class Circle:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius

    def __repr__(self):
        return f"Circle({self.x}, {self.y}, {self.radius})"

c = Circle(100, 80, 30)
repr(c)
# Circle(100, 80, 30)
```
If representing the object as shown above is not desirable or not possible, a good alternative is a representation using `<...>`, e.g. `<_io.TextIOWrapper name='somefile.txt' mode='w' encoding='UTF-8'>`.
Apart from `__repr__`, it's also a good idea to implement the `__str__` method, which is used by default when `print(instance)` is called. With these two methods you can get lots of information just by printing your variables.
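As a sketch of the pairing, here is the `Circle` class extended with a `__str__` method (the human-readable wording is of course up to you):

```python
class Circle:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius

    def __repr__(self):
        # Unambiguous, developer-facing representation for debugging
        return f"Circle({self.x}, {self.y}, {self.radius})"

    def __str__(self):
        # Readable, user-facing representation, used by print()
        return f"circle at ({self.x}, {self.y}) with radius {self.radius}"

c = Circle(100, 80, 30)
print(repr(c))  # Circle(100, 80, 30)
print(c)        # circle at (100, 80) with radius 30
```

A handy rule of thumb: `__repr__` is for developers reading logs, `__str__` is for end users.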
__missing__ Dunder Method For Dictionaries
If you for whatever reason need to implement a custom dictionary class, then you can expect some bugs arising from `KeyError`s when you try to access some key that doesn't actually exist. To avoid having to poke around in the code to see which key is missing, you can implement a special `__missing__` method, which is called whenever a lookup would otherwise raise `KeyError`.
```python
import logging

class MyDict(dict):
    def __missing__(self, key):
        message = f'{key} not present in the dictionary!'
        logging.warning(message)
        return message  # Or raise some error instead
```
The implementation above is very simple and only returns and logs a message with the missing key, but you could also log other valuable information to give you more context about what went wrong in the code.
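For instance, one piece of context that often pinpoints the bug immediately is the set of keys that *do* exist. A sketch of that idea (`ContextDict` is an illustrative name, not from the text above):

```python
import logging

class ContextDict(dict):
    """dict subclass whose __missing__ logs context before failing."""
    def __missing__(self, key):
        # Log the missing key along with the keys that are actually present,
        # then fail loudly instead of returning a fallback value
        logging.warning("%r not found; available keys: %s", key, sorted(self))
        raise KeyError(key)

d = ContextDict(host="localhost", port=8080)
d["host"]  # Normal lookups are unaffected
```

A typo like `d["host_name"]` then produces a warning listing `['host', 'port']` right next to the traceback.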
Debugging Crashing Application
If your application crashes before you get a chance to see what is going on in it, you might find this trick quite useful.
Running the application with the `-i` argument (`python3 -i app.py`) causes it to start an interactive shell as soon as the program exits. At that point you can inspect variables and functions.
If that's not good enough, you can bring a bigger hammer - `pdb` - the Python Debugger. `pdb` has quite a few features which would warrant an article of their own. But here is an example and a rundown of the most important bits. Let's first see our little crashing script:
```python
# crashing_app.py
SOME_VAR = 42

class SomeError(Exception):
    pass

def func():
    raise SomeError("Something went wrong...")

func()
```
Now, if we run it with the `-i` argument, we get a chance to debug it:
```
# Run crashing application
~ $ python3 -i crashing_app.py
Traceback (most recent call last):
  File "crashing_app.py", line 9, in <module>
    func()
  File "crashing_app.py", line 7, in func
    raise SomeError("Something went wrong...")
__main__.SomeError: Something went wrong...
>>> # We are in an interactive shell
>>> import pdb
>>> pdb.pm()  # start Post-Mortem debugger
> .../crashing_app.py(7)func()
-> raise SomeError("Something went wrong...")
(Pdb) # Now we are in debugger and can poke around and run some commands:
(Pdb) p SOME_VAR  # Print value of variable
42
(Pdb) l  # List surrounding code we are working with
  2
  3  class SomeError(Exception):
  4      pass
  5
  6  def func():
  7  ->     raise SomeError("Something went wrong...")
  8
  9  func()
[EOF]
(Pdb)  # Continue debugging... set breakpoints, step through the code, etc.
```
The debugging session above shows very briefly what you can do with `pdb`. After the program terminates, we enter an interactive debugging session. First, we import `pdb` and start the debugger. At that point we can use all the `pdb` commands. As in the example above, we print a variable using the `p` command and list code using the `l` command. Most of the time you will probably want to set a breakpoint, which you can do with `b LINE_NO`, run the program until the breakpoint is hit (`c`), and then continue stepping through the function with `s`, optionally printing a stacktrace with `w`. For a full listing of commands you can go over to the pdb docs.
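Instead of the post-mortem route, you can also stop at a chosen line up front: since Python 3.7 the built-in `breakpoint()` drops into `pdb` at the point where it is called (and it honors the `PYTHONBREAKPOINT` environment variable, so it can be disabled without editing the code). A minimal sketch, with a hypothetical function name:

```python
import os

def buggy_sum(numbers):
    total = 0
    for n in numbers:
        breakpoint()  # Pauses here with a (Pdb) prompt on each iteration
        total += n
    return total

# Setting PYTHONBREAKPOINT=0 turns every breakpoint() call into a no-op,
# which is handy for CI runs or when you are done debugging
os.environ["PYTHONBREAKPOINT"] = "0"
print(buggy_sum([1, 2, 3]))  # 6
```

With the environment variable unset, each loop iteration stops at the `(Pdb)` prompt where `p total`, `s`, `c`, and the other commands above all work.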
Inspecting Stack Traces
Let's say your code is, for example, a Flask or Django application running on a remote server where you can't get an interactive debugging session. In that case you can use the `traceback` and `sys` packages to get more insight into what's failing in your code:
```python
import traceback
import sys

class SomeError(Exception):  # from the crashing script above
    pass

def func():
    try:
        raise SomeError("Something went wrong...")
    except:
        traceback.print_exc(file=sys.stderr)
```
When run, the code above will print the last exception that was raised. Apart from printing exceptions, you can also use the `traceback` package to print a stacktrace (`traceback.print_stack()`) or extract a raw stack frame, format it and inspect it further (`traceback.format_list(traceback.extract_stack())`).
Reloading Modules During Debugging
Sometimes you might be debugging or experimenting with some function in an interactive shell and making frequent changes to it. To make the cycle of running/testing and modifying easier, you can run `importlib.reload(module)` to avoid having to restart the interactive session after every change:
```
>>> import module
>>> module.func()
"This is result..."
# Make some changes to "func"
>>> module.func()
"This is result..."  # Outdated result
>>> from importlib import reload; reload(module)  # Reload "module" after changes made to "func"
>>> module.func()
"New result..."
```
This tip is more about efficiency than debugging. It's always nice to be able to skip a few unnecessary steps and make your workflow faster and more efficient. In general, reloading modules from time to time is a good idea, as it can help you avoid trying to debug code that has already been modified a bunch of times in the meantime.
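To see the effect end-to-end outside a REPL, here is a self-contained sketch that fakes the "edit the file" step by rewriting a throwaway module on disk (`scratch_mod` is a made-up name used only for this demo):

```python
import importlib
import pathlib
import sys
import tempfile

# Keep this self-contained: write a throwaway module into a temp directory
tmp_dir = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp_dir) / "scratch_mod.py"
mod_path.write_text("def func():\n    return 'This is result...'\n")

sys.dont_write_bytecode = True  # Skip .pyc files so reload always recompiles the source
sys.path.insert(0, tmp_dir)

import scratch_mod
print(scratch_mod.func())  # This is result...

# Simulate editing the function, then reload instead of restarting the session
mod_path.write_text("def func():\n    return 'New result...'\n")
importlib.reload(scratch_mod)
print(scratch_mod.func())  # New result...
```

Note that `reload` re-executes the module in place, so existing references to `scratch_mod` see the new `func` - but names imported earlier with `from scratch_mod import func` keep pointing at the old object.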