The most accurate test scenarios come from your real user traffic. If you are an mPulse user, you can work with Akamai to create a test plan that includes suggested test paths, distribution of traffic, think times, peak traffic data, and more; these recommendations are very specific because they are based on real data. If you don't have mPulse, there are a variety of other ways to define the scenarios that will give you the best possible coverage. Once you have defined them, use the attached scenario guides to document your flows and supporting data.
The browser scenario doc is used for most test situations. When we are also making web services calls, or using a device that does not communicate with the backend through a browser, it's useful to send the headless scenario doc as well. Check with your Performance Engineer (PE) if you have any questions. Below is text from the browser doc that you can use if you end up asking customers by email about scenario distribution, think times, or ramp times. Do not simply ask the questions without providing some guidance.
% User Breakdown for Scenarios
For each of the scenarios in this document, identify the percentage of the total user load that each scenario should exercise. Distribute the scenarios either to reproduce expected traffic patterns or to stress-test a specific part of the infrastructure, based on an understanding of typical user activity on the application. The closer these numbers are to reality, the better the results of the test. The percentages must total 100%. If you don't know, you can divide the scenarios up evenly and we can adjust as necessary, or you can discuss the criteria for scenario distribution with a SOASTA Performance Engineer.
• Scenario 1 (Login): 25%
• Scenario 2 (Search): 15%
• Scenario 3 (Browse site): 50%
• Scenario 4 (Post blog entry): 10%
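To make the breakdown concrete, here is a minimal sketch of how a load generator might assign each new virtual user to a scenario using these percentages. The scenario names and weights come from the example list above; the function names are illustrative, not part of any specific tool's API.

```python
import random

# Example weights from the scenario breakdown above (must total 100%).
SCENARIOS = {
    "Login": 25,
    "Search": 15,
    "Browse site": 50,
    "Post blog entry": 10,
}

def validate_weights(weights):
    """Reject a breakdown whose percentages don't cover 100% of users."""
    total = sum(weights.values())
    if total != 100:
        raise ValueError(f"Scenario weights sum to {total}, expected 100")

def pick_scenario(weights, rng=random):
    """Choose the scenario for the next virtual user, weighted by percentage."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

validate_weights(SCENARIOS)
```

Over many virtual users the observed mix converges to the configured percentages, which is exactly the property the test plan is asking the customer to specify.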
We will need to discuss ramp times prior to the test to determine how quickly we want to place load on the system. For the first test(s) we typically recommend a relatively conservative approach, particularly if you've never tested the application and infrastructure at the target load. Normally, we suggest spending about half the test ramping to the target traffic goal; this lets us see how the system responds at different load levels along the way and make adjustments if necessary. For subsequent tests we'll set the ramp rates based on what we learn and/or on the specific goals of the test: for example, we'd accelerate the load to understand what happens during a sudden spike of users.
Different scenarios can ramp at different rates.
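The "spend about half the test ramping" advice can be modeled as a simple linear ramp followed by a steady-state hold. This sketch is only a model of that guidance, not a real tool's ramp configuration; actual products also support step ramps and, as noted above, per-scenario ramp rates.

```python
def users_at(t_minutes, test_duration, target_users, ramp_fraction=0.5):
    """Virtual-user count at time t: ramp linearly until ramp_fraction of
    the test has elapsed, then hold at target_users for the remainder."""
    ramp_end = test_duration * ramp_fraction
    if t_minutes >= ramp_end:
        return target_users
    return int(target_users * t_minutes / ramp_end)
```

For a 60-minute test targeting 1,000 users, this model reaches full load at the 30-minute mark, giving visibility into system behavior at intermediate load levels on the way up.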
Think time is the amount of time a user takes to comprehend or fill out a page (or "think") before moving on to the next page. Think time ensures each scenario emulates real-world usage of the application and system under test. For example, a user navigates to a login page, types in their username and password, and then selects the login button. The time spent typing the username and password is "think time": the time between page requests made to the web/app servers.
With some tools, testers eliminate think times so they can put more stress on the system without deploying too many resources. Because we have access to the resources of the cloud, we can use think times that closely emulate how a user, or the game, will interact with the system.
Please identify the think time for each step of each scenario in this document. This should be based on an understanding of the typical user activities on the application. The closer these numbers are to reality, the better the results of the test.
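In a script, a per-step think time is usually just a pause between requests, often with some random variation so virtual users don't move in lockstep. This is a generic sketch of that idea; the function name, jitter amount, and parameters are illustrative assumptions, not any tool's built-in API.

```python
import random
import time

def think(base_seconds, jitter_fraction=0.25, rng=random, sleep=time.sleep):
    """Pause for a scenario step's think time, varied by +/- jitter_fraction
    around the customer-supplied base value so load isn't perfectly synchronized."""
    low = base_seconds * (1 - jitter_fraction)
    high = base_seconds * (1 + jitter_fraction)
    delay = rng.uniform(low, high)
    sleep(delay)
    return delay
```

Passing `rng` and `sleep` explicitly keeps the sketch testable; a real script would simply call it between page requests with the think time the customer specified for that step.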
Seed data is usually provided via a CSV file, though other formats are generally acceptable. Real values are often used for things like product codes, locations, and other public information. Dummy data is used for anything that would otherwise be personally identifiable information (PII) or confidential, such as SSN, login and password, or customer ID.
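As an illustration of the shape such a seed file might take, here is a sketch that parses CSV seed data into per-user records. The column names and values are entirely made up: real product codes and locations alongside dummy credentials, per the guidance above.

```python
import csv
import io

# Hypothetical seed file: public fields use realistic values, while
# login/password are dummy data standing in for confidential information.
SEED_CSV = """product_code,location,username,password
SKU-1001,Boston,testuser01,Passw0rd!01
SKU-2002,Denver,testuser02,Passw0rd!02
"""

def load_seed_rows(text):
    """Parse CSV seed data into one dict per row for use by virtual users."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_seed_rows(SEED_CSV)
```

Each virtual user (or iteration) would then draw one row, so every login or search in the test uses distinct, realistic-looking data without exposing anything confidential.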