                EMBEDDED
Performance-Regression Pitfalls Every Project Should Avoid


               By Travis Lazar
With proper planning and execution, continuous performance-regression testing can be a powerful tool for hardware as well as software projects, enabling servers that support data center needs throughout their life cycle.



Whether they are cloud service providers, co-location data center operators, or enterprises running a private cloud, data center operators have three key demands: reliability, availability, and performance. Developing servers that can support those needs throughout the life cycle of a server product requires more than simple one-time testing. Given the ever-changing nature of the software ecosystem and the breadth of software used in the data center, a multiyear approach to performance-regression testing is required.

This can present major challenges. Every day, thousands of software packages release updates into the data center ecosystem, presenting a technical burden for hardware developers and data center operators who don't necessarily know how their hardware or infrastructure will be used in the future. This reality is a win for the development communities, but it requires a big-picture approach to how hardware developers tackle continuous performance testing.

The historical process of performance-regression testing has often been static. A typical approach might be to develop a shell script that outputs a performance result, run the test as part of a standard build flow, compare the result at build time with a baseline value, and pass/fail the test based on that result.
This has two major drawbacks: It only runs the test at build time, and the baseline value used in determining pass/fail is unlikely to change throughout the life cycle of the test. Fixing the problem would require either manual maintenance of the test suite (which is costly and error-prone) or adoption of continuous performance-regression techniques.
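As a concrete illustration, here is a minimal sketch of that static approach in Python (the benchmark command, baseline value, and tolerance are hypothetical placeholders, not taken from the article):

```python
# A minimal sketch of the static, build-time approach described above.
# The benchmark command and baseline value are hypothetical placeholders.
import subprocess
import sys

BASELINE_SCORE = 1250.0  # fixed when the test was written; never updated
TOLERANCE = 0.05         # tolerate a 5% drop before failing the build

def run_benchmark() -> float:
    # Run a benchmark script that prints a single numeric score.
    out = subprocess.run(["./run_benchmark.sh"], capture_output=True,
                         text=True, check=True)
    return float(out.stdout.strip())

if __name__ == "__main__":
    score = run_benchmark()
    # Pass/fail at build time against a static baseline: both of the
    # drawbacks noted above are visible here.
    if score < BASELINE_SCORE * (1 - TOLERANCE):
        sys.exit(f"FAIL: {score:.1f} is below baseline {BASELINE_SCORE:.1f}")
    print(f"PASS: {score:.1f}")
```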
Continuous performance-regression testing is a methodology for analyzing system performance throughout the lifetime of the product. It involves the entire software stack, from firmware up to user-space applications, addressing the widest possible range of application configurations. Crucially, it doesn't just test with the initial configurations but also continues to test changes to the ecosystem as they evolve over time.
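To make the idea concrete, the sketch below shows one way such an indefinite cadence could look: re-running a suite of workloads on a schedule and snapshotting the stack components that may have changed between runs. The interval, workload names, and harness are all hypothetical.

```python
# A minimal sketch of an indefinite test cadence: re-run the suite on a
# fixed interval against whatever the ecosystem currently provides.
# Workload names, the interval, and run_workload() are hypothetical.
import platform
import subprocess
import time

WORKLOADS = ["stream_triad", "kernel_build", "nginx_wrk"]
INTERVAL_S = 24 * 3600  # once a day, indefinitely

def current_stack() -> dict:
    # Snapshot the stack components that may have changed since the
    # previous run (extend with firmware, library versions, etc.).
    glibc = subprocess.run(["ldd", "--version"], capture_output=True,
                           text=True).stdout.splitlines()[0]
    return {"kernel": platform.release(), "glibc": glibc}

def run_workload(name: str, stack: dict) -> None:
    # Placeholder for a real harness that launches the benchmark and
    # records its score together with the stack snapshot.
    print(f"running {name} on {stack}")

while True:
    stack = current_stack()
    for name in WORKLOADS:
        run_workload(name, stack)
    time.sleep(INTERVAL_S)
```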
The hardware and firmware development worlds have typically lagged modern software communities in their approach to automation and continuous development activities. This is often due to the amount of time it takes to develop even a single generation of hardware, coupled with large amounts of legacy process and tooling.
When applied to hardware and firmware testing, continuous performance-regression testing and analysis deliver valuable insights into how systems behave under a wide variety of software deployments. The information is critical to optimizing workloads and maintaining the predictable software environments that data center operators and end users demand. Because bare-metal testing like this is such a complex task, involving thousands of moving parts, we'll discuss some of the big-picture insights, including the pitfalls and how to avoid them, that we've gained along the way. These lessons can be applied to any software project leveraging modern DevOps technologies.

CONTINUOUS PERFORMANCE REGRESSION 101
We define continuous performance-regression testing as repeatedly evaluating the performance of each workload on a continuous and indefinite cadence. The performance measurements here are done in a fully integrated environment, meaning that we test using the entire hardware and software stack. This means, in turn, that one of many components might change between test runs. These changes can come from the firmware, operating system, kernel, libraries, or other components (see Figure 1).

Each result then provides a point of comparison for each subsequent result. The goal is to provide actionable information with an actionable workflow (a sketch of one possible implementation follows the list):
• Has performance changed in a negative way (regressed)?
• Has performance changed in a positive way (improved)?
• Is the change problematic?
  • If so, how can the issue be reproduced to enable the team to debug it?
• Is the change beneficial?
  • If so, what can be learned from it and applied in other areas?
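Here is a minimal sketch of that workflow, assuming each new result is compared against a rolling history kept in a simple JSON file per workload; the file layout, workload name, threshold, and version strings are all hypothetical.

```python
# A minimal sketch of the actionable workflow above: compare each new
# result against the rolling history and tag it with a stack fingerprint
# so any change can be reproduced. All names here are hypothetical.
import json
import statistics
from pathlib import Path

HISTORY = Path("results/stream_triad.json")  # one history file per workload
THRESHOLD = 0.03  # flag changes larger than 3% in either direction

def record_and_classify(score: float, stack: dict) -> str:
    """Append a result plus its stack fingerprint; classify vs. history."""
    runs = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    verdict = "baseline"  # first-ever result has nothing to compare against
    if runs:
        ref = statistics.median(r["score"] for r in runs[-10:])
        delta = (score - ref) / ref
        if delta <= -THRESHOLD:
            verdict = "regressed"   # negative change: reproduce and debug
        elif delta >= THRESHOLD:
            verdict = "improved"    # positive change: learn and reapply
        else:
            verdict = "unchanged"
    runs.append({"score": score, "stack": stack, "verdict": verdict})
    HISTORY.parent.mkdir(parents=True, exist_ok=True)
    HISTORY.write_text(json.dumps(runs, indent=2))
    return verdict

# Example: one run of a hypothetical workload on the current stack.
print(record_and_classify(
    112.4,
    {"firmware": "1.07", "kernel": "5.4.0-65", "glibc": "2.31"},
))
```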


The depth of the software stack here means that the amount of change we could see is substantial. Testing every individual commit or version update is not practical or useful. We effectively test "as often as

Figure 1: Core elements of the software stack used in continuous performance-regression testing encompass firmware up to workloads run using user-space applications.