Relevant test case

From HPC Wiki
Revision as of 13:07, 11 May 2020



A relevant test case is a combination of data set, application, and parameters that reflects production behaviour, or at least allows assumptions (to be proven) about the real operating point. Defining a relevant test case is essential for performance engineering, both when developing an application and when using an (even black-boxed) application efficiently. Typically it requires (at least basic) knowledge of the algorithms used in the application, a prediction of the required size of the computation jobs, and of course of which features of the software will be used.

As a first approximation, a [set of] real production test case[s] is a relevant test case in itself. However, such test cases are typically large, long-running, and unwieldy or even impossible to analyse, so a reduced relevant test case is needed. Typical ways to obtain a reduced test case are to use a real data set and shrink it (e.g. lower the grid resolution), to crop the execution after a handful of iterations (before convergence is reached), or to eliminate some
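The reduction strategies above (shrinking the data set, cropping the iteration count) can be sketched as follows. This is a minimal illustration, not from the wiki: the Jacobi sweep stands in for whatever compute kernel a real production code runs, and all grid sizes and iteration counts are made-up example numbers.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi relaxation sweep on a 2-D grid; stands in for the production hotspot."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

def run_case(n, max_iters):
    """Run the solver on an n x n grid, cropped after max_iters iterations."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                  # same boundary conditions as production
    for _ in range(max_iters):
        u = jacobi_step(u)
    return u

# Production-like case (illustrative numbers): fine grid, run to convergence.
#   u_prod = run_case(n=4096, max_iters=100_000)

# Reduced relevant test case: coarser grid, only a handful of iterations,
# but the very same code path (jacobi_step) that production would take.
u_reduced = run_case(n=256, max_iters=20)
```

The key point is that only the data-set size and iteration count change; the kernel and boundary setup stay identical, so analysis of the reduced case still says something about the production case.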

  • The same software path as in production must be used. Needless to say, if production uses compute kernel A, a data set that exercises kernel B is not relevant.
  • The hotspots of the production runs must also be hotspots in the reduced relevant test case (rule of thumb: about half of the overall execution time should be spent in hotspots). This in turn leads to the rule: do not downsize too much (for performance engineering TBD