Note the use of the double underscore in the probe name. In a DTrace script using the probe, the double underscore needs to be replaced with a hyphen, so transaction-start is the name to document for users. Add the macro call to the appropriate location in the source code; in this case it looks like the first sketch below. After recompiling and running the new binary, check that your newly added probe is available by executing the DTrace listing command shown after it.
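A sketch of the macro call, mirroring the standard transaction__start example (the exact variable passed is an assumption based on that example):

    TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);

And a sketch of the probe listing; dtrace -l lists probes and -n names the probe to match (run as root while the new binary is running):

    # dtrace -ln transaction-start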
You should see similar output, listing the probe's ID, provider, module, function, and name. Take care that the data types specified for a probe's parameters match the data types of the variables used in the macro; otherwise, you will get compilation errors.
On most platforms, if PostgreSQL is built with --enable-dtrace, the arguments to a trace macro will be evaluated whenever control passes through the macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables, but beware of putting expensive function calls into the arguments. If you need to do that, consider protecting the macro with a check to see whether the trace is actually enabled, as in the sketch below.
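A sketch of that guard using the generated _ENABLED() test macro; some_expensive_function() is a hypothetical stand-in for whatever costly expression you want to avoid evaluating:

    if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
        TRACE_POSTGRESQL_TRANSACTION_START(some_expensive_function());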
Compiling for Dynamic Tracing

By default, probes are not available, so you will need to explicitly tell the configure script to make the probes available in PostgreSQL, as shown below.
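A minimal sketch of the build step; any other configure options you normally use would be added alongside --enable-dtrace:

    ./configure --enable-dtrace
    make
    make install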
Built-in Probes

A number of standard probes are provided in the source code, as shown in the table of standard probes; a companion table shows the types used in the probes.

Those core dumps may then be examined at your leisure, giving you time to get more than just a backtrace, because you're not holding up the backend's execution while you think and type.
If you are trying to find out the cause of an unexpected error, the most useful thing to do is to set a breakpoint at errfinish before you let the backend continue, as in the sketch below.
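A minimal sketch at the gdb prompt (errfinish is the routine that elog/ereport calls pass through):

    (gdb) break errfinish
    (gdb) cont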
Now, in your connected psql session, run whatever query is needed to provoke the error. When it happens, the backend will stop execution at errfinish. Collect your backtrace with bt, then quit, or possibly cont if you want to do it again. You may want to adjust those settings to avoid having to continue through a bunch of unrelated messages. GDB will automatically interrupt the execution of a program if it detects a crash, so once you've attached gdb to the backend you expect to crash, just let it continue execution as normal and do whatever is needed to make the backend crash.
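A sketch of that flow at the gdb prompt; the PID 12345 is hypothetical, and you would substitute the PID of the backend you expect to crash:

    $ gdb -p 12345      # attach to the suspect backend
    (gdb) cont          # resume it, then provoke the crash from your client
    (gdb) bt            # gdb stops on the fatal signal; collect the backtrace
    (gdb) cont          # let the process finish dying
    (gdb) quit          # once gdb reports the process has exited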
At the gdb prompt you can enter the bt command to get a stack trace of the crash, then cont to continue execution. When gdb reports the process has exited, use the quit command. Alternatively, you can collect a core file as explained below, but it's probably more hassle than it's worth if you know which backend to attach gdb to before it crashes. It's a lot harder to get a stack trace from a crashing backend when you don't know why it's crashing, what causes the crash, or which backend will crash and when.
For this, you generally need to enable the generation of core files, which are debuggable dumps of a program's state that are generated by the operating system when the program crashes.
This article provides a useful primer on core dumps on Linux. Generally, adding "ulimit -c unlimited" to the top of the PostgreSQL startup script and restarting PostgreSQL is sufficient to enable core dump collection. Make sure you have plenty of free space in your PostgreSQL data directory, because that's where the core dumps will be written and they can be fairly big due to Pg's use of shared memory. On a Linux system it's also worth changing the file name format used for core dumps so that core dumps don't overwrite each other.
I suggest a core.-prefixed name pattern that distinguishes one dump from the next; see man 5 core. To apply the settings change, write the pattern to /proc/sys/kernel/core_pattern, as in the sketch below. Once you've enabled core dumps, you need to wait until you see a backend crash; you should then see a core file appear in your PostgreSQL data directory, generated by the operating system, and you'll be able to point gdb at it to collect a stack trace or other information.
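A sketch of the two settings (run as root; the %p PID pattern is an illustrative choice, and systemd-based distributions may manage core_pattern differently):

    # permit core files of unlimited size (add to the PostgreSQL startup script)
    ulimit -c unlimited
    # include the PID in core file names so dumps don't overwrite each other
    echo 'core.%p' > /proc/sys/kernel/core_pattern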
You need to tell gdb which executable file generated the core if you want to get useful backtraces and other debugging information. To do this, just specify the postgres executable path and then the core file path when invoking gdb, as in the sketch below. You can then debug it as if it were a normal running postgres, as discussed in the sections above. For example, having just forced a postgres backend to crash with kill -ABRT, I have a core file (named with the core. prefix set earlier) to point gdb at.
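A sketch of the invocation; the executable path and core file name are hypothetical, so substitute the paths from your own installation:

    $ gdb /usr/lib/postgresql/9.1/bin/postgres /var/lib/postgresql/9.1/main/core.12345
    (gdb) bt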
The stack trace gdb prints may or may not include function arguments, depending on obscure details largely outside your control, such as whether Postgres was originally built to omit frame pointers, the DWARF version in use, and so on.
The situation with getting backtraces on mainstream Linux platforms has improved significantly since this advice was originally written. In general, the more information you can provide for debugging, the better. If you don't have proper symbols installed, specify the wrong executable to gdb, or fail to specify an executable at all, you'll see a useless backtrace, typically little more than raw addresses and ?? frames.
If you get something like that, don't bother sending it in. A few standard trace points are provided in the source code; of course, more can be added as needed for a particular problem. These are shown in the table of standard trace points. Note how the double underscore in trace point names needs to be replaced by a hyphen when used in a D script. When executed, such a D script gives output summarizing the probe firings, as in the sketch below.
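Because the original example script and its sample output were not reproduced here, the following minimal D sketch illustrates the idea; the script name and aggregation label are hypothetical, and $1 is the PID of the backend to trace:

    #!/usr/sbin/dtrace -qs

    /* count transaction starts in the traced backend */
    postgresql$1:::transaction-start
    {
        @starts["transactions started"] = count();
    }

Run it as root, e.g. dtrace -qs txcount.d <pid>; when the script exits, dtrace prints the aggregation with the observed count.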
Remember that trace programs need to be carefully written and debugged prior to use; otherwise, the trace information collected may be meaningless. In most cases where problems are found, it is the instrumentation that is at fault, not the underlying system. When discussing information found using dynamic tracing, be sure to include the script used, so that it too can be checked and discussed.
New trace points can be defined within the code wherever the developer desires, though this will require a recompilation. If you are unsure where the postgresql.conf file lives, you can ask the server for the path, as in the sketch below; in this case, we can see the path to the postgresql.conf file in the output. Now just open that file with your favorite text editor and we can start changing settings.
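A sketch of the lookup from psql (SHOW config_file is a real parameter; the path shown is illustrative):

    postgres=# SHOW config_file;
                   config_file
    -------------------------------------------
     /etc/postgresql/9.1/main/postgresql.conf
    (1 row)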
The data directory path will also be useful later on, and retrieving it is a matter of another simple SHOW statement, shown below. On some installations, the configuration file and the data directory will be along the same path, while in others (like this example), they are different. Either way, copy down this data directory path for later use.
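A sketch of that lookup (again, the output path is illustrative):

    postgres=# SHOW data_directory;
            data_directory
    -------------------------------
     /var/lib/postgresql/9.1/main
    (1 row)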