CloudTest Project/Engagement Process

This article outlines the high-level process to be followed for most CloudTest load and performance projects.   

  • Initial scoping meeting
    • Verbal discussion to provide an early estimate of test preparation time for the business processes
    • First opportunity to guide the business owner toward certain business processes; emphasize the primary performance flows, not all possible functional flows
  • Kickoff meeting
    • Goals of the kickoff meeting (agenda topics):
      • Define the testing goals
      • Provide an initial scope estimate based on the initial scoping meeting
      • Finalize the business processes (scenarios) to model
      • Identify date requirements (e.g., the go-live date) and establish the date of the first test, the estimated number of tests, the date by which the tests must be created, etc.
      • Discuss monitoring requirements
      • Schedule the test planning meeting, typically 1-2 days after the completed scenario document is received
      • Schedule the monitoring planning meeting, typically 1-2 days before testing
    • After the meeting: provide the business process/scenario template to the business owner to document the finalized business processes
    • After the meeting: provide the monitoring FAQ and worksheet to the business owner
  • Receive the completed scenario document from the business owner
    • The Performance Engineer reviews the scenario document to identify any incorrect or missing data and verifies URLs, logins, etc.
    • Refine the scope estimate, if necessary
    • Provide feedback to the business owner about any implications for the schedule
  • Test planning meeting
    • Verbal walkthrough of the scenarios; this is the Performance Engineer's opportunity to verify their understanding of the documented business processes (test scenarios)
    • Confirm the test details (a worked sizing sketch follows the process list below):
      • Timing of the test (number of hours, day, time of day, etc.)
      • Number of users
      • Percentage breakdown of users by scenario
      • Ramp-up times
      • Expected length of each scenario (i.e., verify the think times)
    • Collect any missing source data (usernames, passwords, etc.)
    • Identify the important result metrics
    • Document all relevant decisions in a test planning document
  • Record/analyze scenarios
    • The Performance Engineer records the business processes to identify any potential red flags for building the tests
    • Refine the scope estimate and note any implications for the schedule, if necessary
  • Build tests
  • Follow-up meeting to discuss the test scenarios, if necessary
    • Address any additional questions regarding the scenarios
    • Update the team on the estimated number of hours to complete all required scenarios
  • Monitoring setup meeting
    • Set up any desired monitoring for the test
    • Confirm the monitoring is working correctly
  • Smoke/shakedown test
    • The goal of this test is to verify that all test scenarios function correctly
      • Depending on the environment the tests are executed against, these tests are typically in the 50-200 user range and run for about 30 minutes
      • The minimum recommendation is to run 5 virtual users per test scenario
      • If unique seed data is used, run the test long enough to verify that all seed data is valid (see the seed-data duration sketch after the process list below)
  • Calibration test
    • The goal of this test is to determine how many virtual users can be executed per load server (see the load-server sizing sketch after the process list below)
    • This calibration process must be repeated for each test scenario
    • These tests typically execute between 500 and 1,000 users
    • A calibration process has been documented by Akamai and is available for use
  • Pre-test checklist (at least one day before the test)
    • Obtain the necessary internal approvals to run the test
    • Confirm with the hosting/CDN provider(s), as necessary
      • Note: Akamai requires a much longer lead time
    • Confirm with all 3rd-party vendors
    • Confirm with all 3rd-party dependencies (e.g., internal web services, other applications)
    • Complete the calibration tests
    • Create the Test Composition
    • Establish the conference call/WebEx/calendar invites and send the notifications
  • Pre-test checklist (day of test)
    • Cycle services/servers, if appropriate
    • Confirm monitoring is working properly
    • Confirm the application is working properly (a scripted probe sketch follows the process list below)
    • Confirm all setting changes have been applied to the servers prior to the test
    • Run a sanity test of 5 users per test scenario to make sure nothing has changed on the application/infrastructure side
  • Load test session(s)
    • Notify the broad team that testing is starting
    • Start the load servers
    • Verify the load servers are functioning correctly
    • Run a second sanity test of 5 users per test scenario from each of the load servers; this verifies there aren’t any IP/whitelist issues from the cloud
    • Execute the test(s)
    • Stop the load servers
    • Notify the broad team that testing is complete
  • Result report creation
    • Distribute the report to all stakeholders
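
The sketches below illustrate some of the sizing arithmetic referenced in the process above; all of them use hypothetical numbers and plain Python, not CloudTest itself.

Test planning sizing sketch. Given a total user count, a percentage breakdown by scenario, and a ramp-up time (all made-up values here), the per-scenario virtual user counts and the overall ramp rate fall out directly:

    import math

    # Hypothetical values agreed on in the test planning meeting.
    total_users = 2000                 # peak concurrent virtual users
    scenario_mix = {                   # percentage breakdown by scenario
        "browse_catalog": 0.50,
        "search_and_view": 0.30,
        "checkout": 0.20,
    }
    ramp_up_minutes = 20               # time to reach peak load

    assert abs(sum(scenario_mix.values()) - 1.0) < 1e-9, "mix must total 100%"

    for name, share in scenario_mix.items():
        print(f"{name}: {math.ceil(total_users * share)} virtual users")

    print(f"overall ramp rate: {total_users / ramp_up_minutes:.0f} users/minute")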
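
Seed-data duration sketch. Assuming each virtual user consumes one unique seed record per iteration (an assumption for illustration, not a CloudTest rule), the smoke/shakedown test has to run long enough for the users to cycle through every record; the record count, user count, and iteration time below are invented:

    import math

    seed_records = 5000        # unique seed data rows to validate (hypothetical)
    vusers = 50                # virtual users in the smoke test
    iteration_seconds = 90     # average time for one pass through the scenario

    iterations_needed = math.ceil(seed_records / vusers)
    minutes_needed = iterations_needed * iteration_seconds / 60
    print(f"run the smoke test for at least {minutes_needed:.0f} minutes")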
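
Load-server sizing sketch. The calibration results (maximum stable virtual users per load server, per scenario) combine with the planned load to give a server count; the capacities and targets below are invented, and real numbers come from running the documented calibration process:

    import math

    # Hypothetical calibration results: max stable virtual users per load
    # server, measured separately for each scenario.
    users_per_server = {
        "browse_catalog": 450,
        "search_and_view": 400,
        "checkout": 250,
    }
    # Hypothetical target load per scenario, from the test plan.
    target_users = {
        "browse_catalog": 1000,
        "search_and_view": 600,
        "checkout": 400,
    }

    servers_needed = sum(
        math.ceil(target_users[s] / users_per_server[s]) for s in target_users
    )
    print(f"load servers required: {servers_needed}")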
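
Scripted probe sketch. One way to script the day-of-test "application is working properly" check is a plain HTTP probe of each scenario's entry URL; the URLs are placeholders, and this is a generic script, not a CloudTest feature:

    import requests

    # Placeholder entry-point URLs, one per test scenario.
    ENTRY_URLS = [
        "https://app.example.com/",
        "https://app.example.com/search",
        "https://app.example.com/checkout",
    ]

    def check(url, timeout=10):
        """Return True if the URL answers with a non-error status."""
        try:
            resp = requests.get(url, timeout=timeout)
            ok = resp.status_code < 400
            print(f"{url} -> {resp.status_code} ({'OK' if ok else 'FAIL'})")
            return ok
        except requests.RequestException as exc:
            print(f"{url} -> ERROR ({exc})")
            return False

    if __name__ == "__main__":
        if not all([check(u) for u in ENTRY_URLS]):
            raise SystemExit("application check failed; do not start the test")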

** Note that the steps above are the most common, high-level items that need to be handled in the test process. Please refer to the Stress testing playbook, which is part of the Engagement Guide Spreadsheet, for an example of a detailed, step-by-step, customer-oriented checklist for testing. You can also take a quick look at this PDF.
