15 Sep 2010

Performance Over Time

by Dan

One of the challenges of creating a performance-sensitive library is ensuring that, as you make changes, the performance, correctness, and accuracy of the library do not suffer. In large and complex systems with many dependencies, it is especially challenging to ensure that a change that helps one module does not hurt another.

To solve this problem, we have integrated into our build process a system that captures the performance of our solvers over time. Whenever we make a change, we record the performance of every test and store the results in a database. Alongside each test we record the problem size, job codes and parameters, source code revision, tool versions, machine name, functions that failed, and many more criteria.
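The post doesn't describe the storage layer, but a minimal sketch of such a recording step might look like the following, assuming a SQLite database and hypothetical field names:

```python
import sqlite3
import time

# Hypothetical schema; the real system records many more criteria per test.
SCHEMA = """
CREATE TABLE IF NOT EXISTS perf_results (
    test_name     TEXT,
    problem_size  INTEGER,
    job_code      TEXT,
    revision      TEXT,
    tool_version  TEXT,
    machine       TEXT,
    failed_funcs  TEXT,
    elapsed_sec   REAL,
    recorded_at   REAL
)
"""

def record_result(db_path, test_name, elapsed_sec, **metadata):
    """Store one test's timing plus its metadata in the results database."""
    conn = sqlite3.connect(db_path)
    conn.execute(SCHEMA)
    conn.execute(
        "INSERT INTO perf_results VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
        (
            test_name,
            metadata.get("problem_size"),
            metadata.get("job_code"),
            metadata.get("revision"),
            metadata.get("tool_version"),
            metadata.get("machine"),
            ",".join(metadata.get("failed_funcs", [])),
            elapsed_sec,
            time.time(),
        ),
    )
    conn.commit()
    conn.close()
```

A build step would call record_result once per test after it finishes, so every revision leaves a row behind.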

We then make all of this information searchable and sortable with a simple web app: with a single query we can quickly recall the exact performance of any test we've ever run.
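Against the hypothetical schema above, "recall the exact performance of any test" is just a lookup; a sketch of what the web app would run under the hood:

```python
import sqlite3

def history(db_path, test_name):
    """Return every recorded run of a given test, newest first."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT revision, machine, elapsed_sec, recorded_at "
        "FROM perf_results WHERE test_name = ? "
        "ORDER BY recorded_at DESC",
        (test_name,),
    ).fetchall()
    conn.close()
    return rows
```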

The final piece of the puzzle is that our performance tracking system compares the results of an individual test run with the previous run of the same test and warns us when the performance has changed significantly. With this tool we ensure that our changes do not lead to any unexpected performance regressions.
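The comparison itself can be very simple. Here is a sketch of such a check against the same hypothetical perf_results table; the 10% threshold is an illustrative guess, since the post doesn't say what counts as "significant":

```python
import sqlite3

def check_regression(db_path, test_name, threshold=0.10):
    """Warn if the newest run of a test is more than `threshold` slower
    than the run before it."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT elapsed_sec FROM perf_results "
        "WHERE test_name = ? ORDER BY recorded_at DESC LIMIT 2",
        (test_name,),
    ).fetchall()
    conn.close()
    if len(rows) < 2:
        return None  # nothing to compare against yet
    latest, previous = rows[0][0], rows[1][0]
    slowdown = (latest - previous) / previous
    if slowdown > threshold:
        print(f"WARNING: {test_name} slowed down by {slowdown:.1%} "
              f"({previous:.3f}s -> {latest:.3f}s)")
    return slowdown
```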
