This is an overview of the tools and practices I've used for debugging or profiling. It's not necessarily complete: there are so many tools out there that I'm only listing what I think is best or most relevant. If you know better tools or have other preferences, please comment below.

Logging

Yes, really. I can't stress enough how important it is to have adequate logging in your application. Log the important stuff: if your logging is good enough, you can figure out the problem from the logs alone, and that's a lot of time saved right there. If you ever litter your code with print statements, stop now and use logging.debug instead. You'll be able to reuse it later, disable it altogether, and so on.

Tracing

Sometimes it's better to see what gets executed. You could step through with an IDE's debugger, but then you need to know what you're looking for, otherwise the process will be very slow. The stdlib has a trace module that can print all the executed lines, amongst other things (like making coverage reports):

```shell
python -mtrace --trace script.py
```

This makes lots of output (every executed line is printed), so you might want to pipe it through grep to only see the interesting modules. E.g.:

```shell
python -mtrace --trace script.py | egrep '^(mod1.py|mod2.py)'
```

Alternatives

Grepping for relevant output is not fun. Plus, the trace module doesn't show you any variables. Hunter is a flexible alternative that allows filtering and can even show variables of your choosing. Just pip install hunter and run:

```shell
PYTHONHUNTER="F(module='mod1'),F(module='mod2')" python script.py
```

Take a look at the project page for more examples.

If you're feeling adventurous then you could try smiley - it shows you the variables and you can use it to trace programs remotely. Alternatively, if you want very selective tracing, you can use aspectlib.debug.log to make existing or 3rd party code emit traces.
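Under the hood both the trace module and Hunter build on sys.settrace. A stripped-down sketch of that mechanism (the function names are mine, not from either library):

```python
import sys


def line_tracer(frame, event, arg):
    # the interpreter calls this for every event; we only care about "line"
    if event == "line":
        code = frame.f_code
        print(f"{code.co_filename}:{frame.f_lineno} in {code.co_name}")
    return line_tracer  # returning the tracer keeps line events flowing


def traced(func, *args, **kwargs):
    # run func with line tracing enabled, restoring state afterwards
    sys.settrace(line_tracer)
    try:
        return func(*args, **kwargs)
    finally:
        sys.settrace(None)
```

Filtering by module, the way Hunter does, is then just an `if` on `frame.f_code.co_filename` inside the tracer.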

PDB

Very basic intro; everyone should know this by now:

```python
import pdb
pdb.set_trace()  # opens up the pdb prompt
```

Or:

```python
try:
    code_that_fails()  # whatever raises
except Exception:
    import pdb
    pdb.pm()  # or pdb.post_mortem()
```

Or (press c to start the script):

```shell
python -mpdb script.py
```

Once in the REPL:

- c or continue
- q or quit
- l or list, shows source at the current frame
- w or where, shows the traceback
- d or down, goes down 1 frame on the traceback
- u or up, goes up 1 frame on the traceback
- <enter>, repeats the last command
- ! <stuff>, evaluates <stuff> as python code on the current frame
- everything else is evaluated as python code if it's not a PDB command
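You can even drive a post-mortem session non-interactively by feeding Pdb a scripted stdin, which is handy for poking at a crash from a test. A sketch (the names are made up, and Pdb.interaction is technically an internal detail, though it's what pdb.post_mortem itself uses):

```python
import io
import pdb
import sys


def broken():
    answer = 42
    return answer / 0  # raises ZeroDivisionError


try:
    broken()
except ZeroDivisionError:
    # script the debugger: print `answer` in the crashing frame, then quit
    commands = io.StringIO("p answer\nq\n")
    output = io.StringIO()
    debugger = pdb.Pdb(stdin=commands, stdout=output)
    debugger.reset()
    debugger.interaction(None, sys.exc_info()[2])

print(output.getvalue())  # transcript of the scripted session
```

The `p answer` runs in the frame where the exception was raised, so the transcript shows the local variable's value even though the function has already unwound.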

Better PDB

Drop-in replacements for pdb:

- ipdb (pip install ipdb) - like ipython (autocomplete, colors etc).
- pudb (pip install pudb) - curses based (gui-like), good at browsing source code.
- pdb++ (pip install pdbpp) - autocomplete, colors, extra commands etc.

Remote PDB

```shell
sudo apt-get install winpdb
```

Instead of pdb.set_trace() do:

```python
import rpdb2
rpdb2.start_embedded_debugger("secretpassword")
```

Now run winpdb and go to File > Attach with the password.

Don't like Winpdb? Use PDB over TCP

Get remote-pdb and then, to open a remote PDB on the first available port, use:

```python
from remote_pdb import set_trace
set_trace()  # you'll see the port number in the logs
```

To use a specific host/port:

```python
from remote_pdb import RemotePdb
RemotePdb(host='0.0.0.0', port=4444).set_trace()
```

To connect, just run something like telnet 192.168.12.34 4444. Alternatively, run socat readline tcp:192.168.12.34:4444 to get line editing and history.

Just a REPL

If you don't need a full-blown debugger then just start an IPython with:

```python
import IPython
IPython.embed()
```

If you don't have an attached terminal you can use manhole.
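Under the hood, PDB-over-TCP is conceptually simple: point Pdb's stdin/stdout at a socket. A toy sketch of the idea (names are mine; no error handling, single client, not the actual remote-pdb implementation):

```python
import pdb
import socket
import sys


def serve_pdb(listener):
    """Block until one TCP client connects, then give it a Pdb prompt
    attached to the caller's frame (a toy version of PDB over TCP)."""
    conn, _ = listener.accept()
    stream = conn.makefile("rw", buffering=1)
    debugger = pdb.Pdb(stdin=stream, stdout=stream)
    debugger.set_trace(sys._getframe().f_back)  # debug the caller, not serve_pdb


# Usage (commented out so importing this doesn't block):
# listener = socket.socket()
# listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free one
# listener.listen(1)
# print("connect with: telnet 127.0.0.1 %s" % listener.getsockname()[1])
# serve_pdb(listener)
```

The client just telnets in and gets a normal (Pdb) prompt; c resumes the program and hands control back.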

Having segfaults? faulthandler

A rather awesome addition in Python 3.3, backported to Python 2.x as the faulthandler package. Just add this in some module that's always imported and you'll get at least an idea of what's causing the segmentation fault:

```python
import faulthandler
faulthandler.enable()
```

This won't work in PyPy unfortunately.

If you can't get interactive (e.g.: use gdb) you can just set this environment variable (GNU libc only, details):

```shell
export LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so
```

Make sure the path is correct, otherwise it won't have any effect (e.g.: run locate libSegFault.so).
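You can preview the kind of report faulthandler produces without actually crashing anything: faulthandler.dump_traceback writes the same low-level style of traceback on demand (the temp-file plumbing below is just for demonstration; faulthandler needs a real file descriptor, so StringIO won't do):

```python
import faulthandler
import tempfile

faulthandler.enable()  # install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL

# dump the same style of traceback a crash would produce, on demand
with tempfile.TemporaryFile(mode="w+") as report:
    faulthandler.dump_traceback(file=report, all_threads=True)
    report.seek(0)
    print(report.read())
```

The output is deliberately minimal (file, line, function per frame) because it's written from inside a signal handler, where almost nothing is safe to do.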

Quick stacktrace on a signal? faulthandler

Add this in some module that's always imported:

```python
import faulthandler
import signal
faulthandler.register(signal.SIGUSR2, all_threads=True)
```

Then run kill -USR2 <pid> to get a stacktrace for all threads on the process's stderr.
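To check the wiring without hunting for the pid, a process can simply signal itself. A sketch (SIGUSR2 is POSIX-only, so this assumes Linux or macOS; redirecting to a file instead of stderr is just so the output is easy to inspect):

```python
import faulthandler
import os
import signal
import tempfile

# register the dump against SIGUSR2, writing to a file instead of stderr
report = tempfile.NamedTemporaryFile(mode="w+", delete=False)
faulthandler.register(signal.SIGUSR2, file=report, all_threads=True)

os.kill(os.getpid(), signal.SIGUSR2)  # same as `kill -USR2 <pid>` from a shell

report.seek(0)
print(report.read())
```

The file object passed to register must be kept open: faulthandler holds on to its file descriptor and writes to it whenever the signal arrives.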

Memory leaks

Well, there are plenty of tools here, some specialized for WSGI applications, like Dozer, but my favorite is definitely objgraph. It's amazingly convenient and easy to use. It doesn't have any integration with WSGI or anything else, so you need to find yourself a way to run code like this:

```python
>>> import objgraph
>>> objgraph.show_most_common_types()  # try to find objects to investigate
Request               119105
function                7413
dict                    2492
tuple                   2396
wrapper_descriptor      1324
weakref                 1291
list                    1234
cell                    1011
>>> objs = objgraph.by_type("Request")[:15]  # select a few Request objects
>>> objgraph.show_backrefs(objs, max_depth=15, highlight=lambda v: v in objs, filename="/tmp/graph.png")  # and plot them
Graph written to /tmp/objgraph-zbdM4z.dot (107 nodes)
Image generated as /tmp/graph.png
```

And you get a nice diagram like this (warning: it's very large). You can also get dot output.
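If objgraph isn't available, you can get the first half of that workflow (counting live instances per type) with just the stdlib gc module. A rough, hand-rolled equivalent of show_most_common_types:

```python
import gc
from collections import Counter


def most_common_types(limit=8):
    # count the live objects the garbage collector tracks, grouped by type name
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(limit)


for name, count in most_common_types():
    print(f"{name:<20} {count}")
```

Run it twice around the suspected leak: a type whose count grows on every iteration is a good candidate for the back-reference plotting that objgraph does so well.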