[[Category:HPC-Developer]]
 
=== MPI Profiling with Intel Trace Collector/Analyzer ===
 

Intel Trace Collector and Trace Analyzer are powerful tools that acquire and display information about the communication behavior of an MPI program. Performance problems related to MPI can be identified by looking at timelines and statistical data. Appropriate filters can reduce the amount of information displayed to a manageable level.

In order to use Trace Collector/Analyzer you have to prepare your shell environment (e.g. with modules). This section describes only the most basic usage patterns. Complete documentation can be found on Intel’s ITAC website, or in the Trace Analyzer Help menu. Please note that tracing is currently only possible when using Intel MPI.
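
A typical preparation could look like the following sketch; the module names here are only assumptions and differ from site to site, so check module avail first:

$ module avail itac                 # see which ITAC modules your site provides
$ module load intel intelmpi itac   # assumed module names; adjust to your site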

==== Trace Collector (ITC) ====

ITC is a tool for producing trace files from a running MPI application. These traces contain information about all MPI calls and messages and, optionally, about functions in the user code.

It is possible to trace your application without rebuilding it by dynamically loading the ITC profiling library during execution. The library intercepts all MPI calls and generates a trace file. To start the trace, simply add the -trace option to your mpirun command, e.g.:

$ mpirun -trace -n 4 ./myApp

In some cases, your application has to be rebuilt to trace it, for example if it is statically linked with the MPI library or if you want to add user function information to the trace. To include the required libraries, you can use the -trace option during compilation and linking. Your application can then be run as usual, for example:

$ mpicc -trace myApp.c -o myApp
$ mpirun -n 4 ./myApp
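
If you also want user functions to appear in the trace, one possibility (depending on your compiler and ITAC version) is the Intel compilers' -tcollect option, which instruments function entries and exits with Trace Collector probes. A minimal sketch, assuming the Intel MPI compiler wrapper mpiicc is available:

$ mpiicc -tcollect myApp.c -o myApp   # instrument user functions and link against ITC
$ mpirun -n 4 ./myApp                 # the resulting trace also contains user function events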

You can also specify other profiling libraries; for a complete list, please refer to the ITC User Guide.

After an MPI application that has been compiled or linked with ITC has terminated, a collection of trace files is written to the current directory. They follow the naming scheme <binary-name>.stf* and serve as input for the Trace Analyzer tool. Keep in mind that depending on the amount of communication and the number of MPI processes used, these trace files can become quite large. To generate a single file instead of several smaller ones, specify the option -genv VT_LOGFILE_FORMAT=SINGLESTF in your mpirun call.
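
For example, reusing the names from the commands above, a dynamically traced run that writes one single-file trace could look like this:

$ mpirun -trace -genv VT_LOGFILE_FORMAT=SINGLESTF -n 4 ./myApp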

==== Trace Analyzer (ITA) ====

The <binary-name>.stf file produced after running the instrumented MPI application should be used as an argument to the traceanalyzer command:

traceanalyzer <binary-name>.stf
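
For the running example, where the binary is called myApp, this would be:

$ traceanalyzer myApp.stf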

The Trace Analyzer processes the trace files written by the application and lets you browse through the data. Click on "Charts → Event Timeline" to see the messages transferred between all MPI processes and the time each process spends in MPI and in application code, respectively. Click and drag to zoom into the timeline data (zoom out with the "o" key). "Charts → Message Profile" shows statistics about the communication requirements of each pair of MPI processes. The statistics displays change their content according to the data currently shown in the timeline window. Please consult the Help menu or the ITAC User Guide for more information.

Authors: Katrin Nusser (RRZE) & Thomas Zeiser (RRZE)