
Web Performance


What is Brotli?

Brotli is the name of a Swiss bakery product: a small, often round loaf of bread. However, we are not here to talk about that! Brotli is also a general-purpose lossless compression algorithm developed by Google in 2015. It compresses data with a combination of the LZ77 algorithm and Huffman coding. Since its release, major browsers have widely adopted this compression method, including Google Chrome, Microsoft Edge, Mozilla Firefox, Opera, and Safari.
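Whether a client actually receives Brotli is negotiated through the standard Accept-Encoding request header and Content-Encoding response header. A minimal sketch of the server-side choice (my own simplification: it ignores q-values, and `choose_encoding` is a name I made up, not a library function):

```python
def choose_encoding(accept_encoding, preferred=("br", "gzip", "identity")):
    """Pick the densest content coding the client advertises.

    Simplified sketch: treats Accept-Encoding as a plain comma-separated
    list and ignores q-values entirely.
    """
    offered = {token.split(";")[0].strip().lower()
               for token in accept_encoding.split(",") if token.strip()}
    for coding in preferred:
        if coding in offered:
            return coding
    return "identity"  # uncompressed is always a safe fallback

# A Brotli-capable browser advertises "br":
print(choose_encoding("gzip, deflate, br"))   # br
# An older client falls back to gzip:
print(choose_encoding("gzip, deflate"))       # gzip
```

A real server would also honor q-values (e.g. `br;q=0`), but the preference order above matches what the post describes: serve Brotli when the browser supports it, gzip otherwise.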


Benefits of Brotli

The Brotli algorithm is similar to Deflate in speed but offers denser compression. Its compression ratio is also considerably better than Gzip's. Based on Google's summary in 2016:


  • Brotli outperforms gzip for typical web assets (e.g. CSS, HTML, JS) by 17-25%.
  • Density figures compare Brotli at quality 11 (-11) with gzip at level 9 (-9).
  • HTML (multi-language corpus): 25% savings.
  • JS (Alexa Top 10k): 17% savings.
  • Minified JS (Alexa Top 10k): 17% savings.
  • CSS (Alexa Top 10k): 20% savings.


*Alexa Top 10k refers to the top 10,000 sites ranked by one month of traffic according to Alexa.


Brotli Enablement

So, how do we start using this feature? There are two ways to utilize Brotli in Akamai configurations.

  1. Resource Optimizer module: Akamai compresses the content with Brotli encoding on the edge.
  2. Brotli Support module: Akamai passes on and caches already Brotli-compressed content from the origins.


Let's take a look at each one of the modules in detail.


Resource Optimizer

What is it?

Resource Optimizer automates the compression and delivery of cached resources for websites and mobile apps to decrease bandwidth and improve the web experience. Resource Optimizer uses Brotli and Zopfli compression to shrink resources by 5% to 15% over GZIP and delivers the smallest version a given browser supports.

How to add the module?

The Resource Optimizer module is part of the Adaptive Acceleration feature in the ION Beta Channel. You need to upgrade the configuration to the ION Beta Channel to see this contract line item. You can add this in the Marketplace on Luna or reach out to your account team for help.



Brotli Support

What is it?

When enabled, this module allows the configuration to return Brotli-compressed assets from our customers' origins and cache them on edge servers. When both the Brotli Support and LMA modules are enabled, Akamai caches both the GZIP and Brotli versions of the resource.

How to add the module?

Currently, the module is available for all accounts that have added the beta channel to their delivery product. Please reach out to your account team for enablement details.

(Unlike Resource Optimizer, the product type does not have to be ION for Brotli Support Enablement.)

Long waiting line



HTTP Head of line blocking


Head-of-line (HOL) blocking, in HTTP/1.1 terms, often refers to the fact that each client has a limited number of TCP connections to a server (usually 6 per hostname), and a new request over one of those connections has to wait until the previous request on the same connection completes.


HTTP/1.1 introduced a feature called "pipelining", which allowed a client to send several HTTP requests over the same TCP connection. However, HTTP/1.1 still required the responses to arrive in order, so pipelining didn't really solve the HOL issue, and as of today it is not widely adopted.


HTTP/2 (h2) solves the HOL issue by means of multiplexing requests over the same TCP connection, so a client can make multiple requests to a server without having to wait for the previous ones to complete as the responses can arrive in any order.
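The effect of multiplexing can be sketched with a toy timing model (a deliberate simplification of my own, ignoring bandwidth sharing and TCP behavior): on one HTTP/1.1 connection a response cannot complete until every earlier response has, while h2 streams complete independently. Times are in milliseconds:

```python
def http1_serial(times_ms):
    """One HTTP/1.1 connection: each request waits for the previous
    response, so completion times accumulate."""
    done, clock = [], 0
    for t in times_ms:
        clock += t
        done.append(clock)
    return done

def h2_multiplexed(times_ms):
    """HTTP/2: streams are independent, so each response completes after
    its own service time (bandwidth sharing ignored for simplicity)."""
    return list(times_ms)

times = [300, 100, 200]
print(http1_serial(times))     # [300, 400, 600] -- later responses blocked
print(h2_multiplexed(times))   # [300, 100, 200] -- no head-of-line wait
```

The fast 100 ms response in the middle is the interesting case: over HTTP/1.1 it is stuck behind the slow 300 ms one, while over h2 it arrives first.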


TCP slows down HTTP/2


HTTP/2 does, however, still suffer from another type of HOL blocking because it runs over a single TCP connection: due to TCP's congestion control, one lost packet in the TCP stream makes all streams wait until that packet is re-transmitted and received.


The obvious solution would be to run HTTP/2 over UDP + an optimized way of managing congestion, and that's precisely what the QUIC protocol does, so stay tuned for what the future of the HTTP protocol will be!


This and more is explained in the Learning HTTP/2 O'Reilly book.



Disclaimer: The content of this blog is based on a Stack Overflow post authored by Daniel Stenberg, a well-known HTTP/2 expert and the main developer of the awesome curl command-line utility.

On September 30, 2017, Akamai will be adding IPv6 delegations to Global Traffic Manager (GTM).

The vast majority of recursive DNS resolvers (such as those at an ISP, a large enterprise, or a public provider such as Google Public DNS) are "dual-stacked", meaning they can connect to authoritative nameservers (such as Akamai's GTM servers) over IPv4 or IPv6. Previously, Akamai separated IPv6 GTM domains into the domain (which has IPv6-only delegations), and the vast majority of GTM users were under the domain, which has IPv4-only delegations. To bring GTM in line with our other DNS products (such as FastDNS), we will be adding IPv6 delegations to on September 30, 2017. No changes are being made to the domain at this time.

Which GTM domains are affected?

A GTM domain is affected if ALL of the following are true:

  • The GTM domain contains at least one CIDR-mapped property that is currently in use
  • The CIDR maps for that property do not currently contain IPv6 addresses
  • End users who access the CIDR-mapped property may do so using a recursive resolver that has IPv6 connectivity.  (All major public DNS providers (Google, OpenDNS) have IPv6 connectivity, as do the majority of the largest ISPs in the United States and Europe).


Properties that use Geographic Mapping or AS (Autonomous System) mapping are NOT affected by this change.

Failover, Weighted Hashed, Weighted Round-Robin, Performance, and Load Feedback properties are NOT affected by this change.

What change must be made?

By September 30, 2017, if you are affected (see above), you must inspect your CIDR maps and determine whether IPv6 addresses need to be added. If any necessary IPv6 addresses are not added by September 30, 2017, some end users who were previously sent to a CIDR-specific datacenter may be sent to the default datacenter (indicated as "All Other CIDRs" in Luna) if they use a recursive resolver with IPv6 connectivity.

How to determine which IPv6 addresses to add?

There is no way to automatically determine an IPv6 address given an IPv4 address.  You will need to identify the owners of the IPv4 space and get the corresponding IPv6 CIDRs from the owners.

NOTE: If the only entry in any given CIDR map is a single datacenter, with CIDRs that are "" or "", then no update is needed for that CIDR map.  That is the "localhost" IP address, which should never be visible on the public internet. 


Example: ACME Corp has the GTM domain, and the property "www".  They use a CIDR map so that users in their offices or on their VPN will be sent to the internal website, and other users will be sent to their external website.  They have a CIDR map, indicating that traffic from and will be sent to their internal datacenter, and all other traffic will be sent to their external datacenter.  They maintain their own recursive resolvers that employees are required to use, and do not have IPv6 connectivity.

Action needed: None.  No change is needed, since the CIDR-mapped property is only intended to be used by their employees, and their corporate resolver does not have IPv6 connectivity.

Example: WidgetCo has the domain "", and the property "customer".  They have a partnership with an ISP, GlobalNetCom.  They use a CIDR map so that GlobalNetCom end users get sent to a specific re-branded customer portal, and all other end users get sent to their default portal.  Currently, the CIDR map for GlobalNetCom only contains and .  However, GlobalNetCom recursive resolvers have both IPv4 and IPv6 connectivity.

Action needed: WidgetCo will need to contact GlobalNetCom, and find out what their IPv6 space is.  Once they have that information, they'll need to update the CIDR map to include that IPv6 space (e.g. 2002::ffff:0000/120).  If that change is not made, GlobalNetCom end users using IPv6 would be sent to the default customer portal, rather than the specific re-branded one.
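The matching that GTM performs against a CIDR map can be approximated with Python's standard ipaddress module. The CIDR blocks and datacenter names below are illustrative only, reusing the hypothetical GlobalNetCom example (198.51.100.0/24 is documentation address space; the IPv6 block is the example from above):

```python
import ipaddress

# Hypothetical CIDR map: blocks routed to a specific datacenter;
# everything else goes to the default ("All Other CIDRs").
cidr_map = {
    "rebranded-portal": [
        ipaddress.ip_network("198.51.100.0/24"),     # GlobalNetCom IPv4 (example)
        ipaddress.ip_network("2002::ffff:0000/120"),  # GlobalNetCom IPv6 (example)
    ],
}

def pick_datacenter(resolver_ip, default="all-other-cidrs"):
    """Return the datacenter whose CIDR map contains the resolver address."""
    addr = ipaddress.ip_address(resolver_ip)
    for datacenter, networks in cidr_map.items():
        if any(addr in net for net in networks):
            return datacenter
    return default

print(pick_datacenter("198.51.100.7"))   # rebranded-portal
print(pick_datacenter("2002::ffff:42"))  # rebranded-portal (IPv6 entry present)
print(pick_datacenter("2001:db8::1"))    # all-other-cidrs
```

This is exactly why the IPv6 entry matters: drop it from the map and the second lookup falls through to the default datacenter, which is the failure mode described above.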

Customers may need to use Global Traffic Management (GTM) to balance load between two or more servers in a cloud, such as Amazon's EC2 or AWS ELB service. The provider supplies a hostname for each cloud region that resolves to one or more virtual machines (VMs) in the region.


For example, with the Amazon AWS ELB service, as load changes the service creates or destroys VMs as needed, so hostname resolution changes frequently to reflect the new VMs.


GTM was originally designed to load balance between datacenters with static IP addresses, not between VM environments or cloud-based services that need to be load balanced across changing IP addresses. By default, our monitoring agents probe each datacenter based on the frequency of the configured liveness test; however, they only do a DNS lookup on the datacenter hostnames every 15 minutes, irrespective of the associated DNS TTL.


This may pose a problem if the IP address targeted by the Akamai liveness test has been taken down or replaced with another virtual machine within the agents' default 15-minute DNS lookup interval.


To have GTM liveness test agents resolve the hostname at test time, customers need to enable "Cloud Server Targeting" on the data center.


Steps to enable the "Cloud Server Targeting" feature


1. Log in to Luna Control Center

2. Go to Configure --> Traffic Management --> Configuration

3. Click on the domain which is used to load balance between Cloud based services

4. Go to "Data Centers" section and click on the respective Data centers used for the GTM domain

5. Enable the check box next to "Cloud Server Targeting"
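The difference the feature makes can be sketched as follows (my own illustration, not Akamai agent code): without Cloud Server Targeting the agent reuses a DNS answer for up to 15 minutes, while with it the hostname is resolved fresh at every liveness test, so the probe follows whichever VM is currently behind the name:

```python
import socket
import time

def probe_with_cached_dns(hostname, cache={}, refresh_secs=900):
    """Default-style behavior: re-resolve at most every 15 minutes (900 s).
    The mutable-default dict intentionally acts as the agent's DNS cache."""
    entry = cache.get(hostname)
    if entry is None or time.time() - entry[1] > refresh_secs:
        cache[hostname] = (socket.gethostbyname(hostname), time.time())
    return cache[hostname][0]

def probe_with_cloud_server_targeting(hostname):
    """Cloud Server Targeting: resolve the hostname fresh at every test."""
    return socket.gethostbyname(hostname)

# Each liveness test now targets the VM currently behind the name:
print(probe_with_cloud_server_targeting("localhost"))  # typically 127.0.0.1
```

If the cloud provider replaces a VM two minutes after the cached lookup, `probe_with_cached_dns` keeps probing the dead IP for up to 13 more minutes; the fresh-resolution variant does not.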


More details on GTM are available on Luna Control Center --> Support --> User and Development Guides --> Traffic Management.

Hi all,


The Learning HTTP/2 book, authored by Stephen Ludin and Javier Garza, has been fully available since June 2017.


You can get a copy of the book at O'Reilly's website, or read it online with an O'Reilly Safari subscription.


Here is the table of Contents:

  • Chapter 1 The Evolution of HTTP
  • Chapter 2 HTTP/2 Quick Start
  • Chapter 3 How and Why We Hack the Web
  • Chapter 4 Transition to HTTP/2
  • Chapter 5 The HTTP/2 Protocol
  • Chapter 6 HTTP/2 Performance
  • Chapter 7 HTTP/2 Implementations
  • Chapter 8 Debugging h2
  • Chapter 9 What's next?
  • Appendix A HTTP/2 Frames
  • Appendix B Tools Reference


Learning HTTP/2 book cover


Happy reading!



Akamai has acquired SOASTA and our Technical Support team is focused on the continued delivery of a great Support experience for the CloudTest, mPulse and TouchTest products. To enhance the Support offering for these products, we are expanding the global Support team (formerly SOASTA Support) focused on these products and will incorporate it within Akamai's Support team. As we proceed through the process of uniting resources for you, we will be changing contact points and addresses. Those changes will be updated here. Rest assured, we are here to help you when you need us, so please reference this post to ensure you are up to date on how to reach us. Eventually, we will be fully incorporated into Akamai Technical Support, so you will have one place to go for all of your Akamai Technical Support needs. In the meantime, please use the following touch points when reaching out for help with these three products:



Support Tickets


We will continue to use the previous Support ticketing system for these three products until we migrate to the same ticketing system used by the Akamai Technical Support team (planned for late August 2017). For now, continue to submit CloudTest, mPulse and TouchTest requests through:




To contact Technical Support by email when you need assistance with CloudTest, mPulse or TouchTest, please include your name, a contact number, and the product in question, in addition to the details of your request when submitting your request to the following email address:


Phone Support


Technical Support for these products will soon be accessible through the primary Akamai Technical Support phone line.  For now, please use the above methods for reaching out to Technical Support for CloudTest, mPulse and TouchTest.  If you have an urgent issue, we recommend using the ticketing system, which allows you to identify your issue as Urgent and will trigger our rapid response procedures.


General Product Questions


If you have general product questions, how-to questions, or you are looking for best-practice advice, we recommend posting them in one of the product-specific community discussion areas below, where we have both active internal experts and customer contributions on those topics.





TL;DR: G2O is an efficient and easy-to-enable mechanism, but it is not well supported by open-source software. With this Lua extension it's possible to enable G2O validation in HAProxy, NGINX or Varnish within minutes.



Signature Header Authentication (aka G2O) is a mechanism that allows the backend infrastructure to ensure that requests are coming from a trusted source, specifically the Akamai Platform.


This is an extract from the documentation:

This feature configures edge servers to include two special headers in requests to the origin server. One of these headers contains basic information about the specific request. The second header contains similar information encrypted with a shared secret. This allows the origin server to perform several levels of authentication to ensure that the request is coming directly from an edge server without tampering.
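As a rough illustration of the mechanism (the auth-data layout and signing recipe below are assumptions made for sketching purposes; the authoritative per-version G2O algorithm is in Akamai's documentation): the origin recomputes an HMAC over the auth-data header plus the request path using the shared secret, and compares it to the signed header:

```python
import base64
import hashlib
import hmac

def g2o_sign(auth_data, request_path, secret):
    """Sketch of a G2O-style signature: HMAC-MD5 over the auth-data
    header value concatenated with the request path, base64-encoded.
    Treat the exact recipe as an assumption; different G2O versions
    use different digest algorithms."""
    digest = hmac.new(secret.encode(), (auth_data + request_path).encode(),
                      hashlib.md5).digest()
    return base64.b64encode(digest).decode()

def g2o_validate(auth_data, auth_sign, request_path, secret):
    """Origin-side check: recompute the signature and compare in
    constant time."""
    expected = g2o_sign(auth_data, request_path, secret)
    return hmac.compare_digest(expected, auth_sign)

# The origin trusts the request only when the signed header matches.
# The auth-data string below is purely illustrative:
data = "3, 203.0.113.10, 198.51.100.5, 1500000000, 12345, v1"
sign = g2o_sign(data, "/index.html", "s3cr3tk3y")
assert g2o_validate(data, sign, "/index.html", "s3cr3tk3y")
```

Because the secret is shared only between Akamai and the origin, a request that arrives without a matching signature cannot have come through the Akamai platform untampered.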


Why Lua?


G2O validation can generally be implemented in two places:

  • Web Application
  • Web Server / Load Balancer like HAProxy or NGINX


An excellent example of the integration within a web application was done by Anthony Hogg.
I didn't find many implementations for web servers and load balancers, and all of the existing solutions require compiling code, which can generate some maintenance overhead.
Some servers do not support dynamic modules at all (e.g. HAProxy), which requires patching and building from source. That definitely does not make life easier.

However, the top open-source servers, such as HAProxy, NGINX, Apache and Varnish, do support the Lua language.
Lua is a very popular, lightweight, embeddable scripting language, and, best of all, a single codebase can be used across multiple vendors.




The project consists of a single library responsible for G2O validation, plus connectors for multiple vendors.
The library is fairly generic and has an external dependency on luaossl.
The connectors are vendor specific, and their responsibility is to ease the library integration.


Repository link: GitHub - lukaszczerpak/akamai-g2o-lua: Akamai Signature Header Authentication (G2O) for HAProxy, NGINX and other servers… 


How to install




G2O validation requires the luaossl library, which can be installed with luarocks.
Please refer to your distribution's documentation regarding installation steps.




akamai-g2o-haproxy-wrapper.lua registers two functions:

  • g2o_validation_fetch - referenced from the haproxy frontend section
  • g2o_failure_service - used from the haproxy backend when validation fails; it serves a 400 response with an "Unauthorized Access" response body


Note: It is assumed that the configuration file and all G2O related files are stored in `/etc/haproxy` path.


In /etc/haproxy/haproxy.cfg within global section add the following line:

    lua-load /etc/haproxy/akamai-g2o-haproxy-wrapper.lua


Add G2O validation to your frontend settings:

frontend g2o-example
   bind             :80
   mode             http
   use_backend      %[lua.g2o_validation_fetch(5,"s3cr3tk3y",30,"g2o-failure-backend","g2o-success-backend")]
   default_backend  g2o-failure-backend
   log     local1 debug


Parameters for g2o_validation_fetch are as follows:


  • version of G2O
  • secret key (same as in the Akamai configuration file)
  • time delta - acceptable time margin for timestamp validation
  • failure backend - which backend to use when G2O validation fails
  • success backend - which backend to use when G2O validation succeeds


Now let's define the backends:

backend g2o-success-backend
   balance  roundrobin
   server   web1 X.X.X.X:80
   log local1 debug

backend g2o-failure-backend
   http-request use-service lua.g2o_failure_service
   log local1 debug


It is possible to enable G2O validation in soft mode, so requests with an invalid G2O signature will not be rejected, but a warning will be logged. This can be done by using the same backend for both the failure and success actions:

   use_backend      %[lua.g2o_validation_fetch(5,"s3cr3tk3y",30,"g2o-success-backend","g2o-success-backend")]






Note: It is assumed that the configuration file and all G2O related files are stored in `/etc/nginx` path.


Enable the Lua module in /etc/nginx/nginx.conf first:

load_module modules/;


and load the connector in the server section:

init_by_lua 'require("/etc/nginx/akamai-g2o-nginx-wrapper")';


G2O can be enabled for certain paths or the entire site in the following way:

  location /nginx-g2o {
     access_by_lua 'akamai_g2o_validate_nginx(5, "s3cr3tk3y", 30)';
  }



Parameters for akamai_g2o_validate_nginx() are as follows:

  • version of G2O
  • secret key (same as in the Akamai configuration file)
  • time delta - acceptable time margin for timestamp validation





G2O itself is an interesting alternative to SiteShield or mutual certificate authentication. Since Lua is supported out of the box by many servers, the G2O extension can easily be set up without a need to patch or compile source code, which saves a lot of maintenance effort.

As you've no doubt heard, Akamai has acquired SOASTA and we are pleased to carry the banner of SOASTA's products forward as Akamai CloudTest, mPulse and TouchTest.  As a former CloudLink user, you’ll find all the same content as before, and more.  In fact, many of the best practices and methodologies from our Performance Engineering and Web Performance Analyst teams are now available.  


Note: You may find some broken links that point back to CloudLink, which we are aggressively fixing as we uncover them.  See below for more information.


Here are some quick links to get you started:






Please note that the discussions from CloudLink are in the process of being moved to the Akamai Community and will not show up at this time. They should be imported soon. If you want to ask a question, you can click above or navigate to the appropriate Space, choose to ask a question, and see if there may already be an answer.


If you see a link with the arrow pointing up and to the right, as shown below, that means it's pointing to a URL that is not in the Akamai Community; many of these still reference CloudLink. As a short-term workaround, depending on your browser, if you hover over the link you will see the title of the article. If it's a CloudLink URL, you can search with a few words from the title to find the article in the Akamai Community.


devOps: FOSSL helper

Posted by Akshay Ranganath Employee Jun 21, 2017

One of the basic tenets of DevOps is to create throw-away environments. We now have PAPI and ConfigKit to perform configuration management. However, there is no easy way to replace the FOSSL settings when building these throw-away environments.

There is a more detailed blog post from the team creating the wrapper functions about the Akamai for DevOps effort.


A new Go-based CLI has been released, and the code is available in the akamai/cli Git repo.

To aid in this process, I built a python script that can find and replace the default TLS certificate information in the PAPI rule set. The code is available here:

GitHub - akshayranganath/akamai-open-fossl: Helper module to the Akamai PAPI / Akamai configkit.  


The GitHub page has usage information as well. Please have a look and let me know if you find any bugs!


Here are the details of the script, pulled from the GitHub page.


FOSSL Helper

This module provides helper functions to easily pull the origin's PEM file and update the Akamai configuration rules. Using the Property Manager API (PAPI), the configuration can then be activated on the Akamai network.


After downloading the code, execute the following to install the libraries.

pip install -r requirements.txt


CLI usage of the main Python file is as follows:

$ python --help
usage: [-h] --file FILE [--origin ORIGIN] [--pem_file PEM_FILE] [--use_sni]

Script to create the FOSSL setting for your origin and update your configuration rules

optional arguments:
  -h, --help           show this help message and exit
  --file FILE          PAPI rules file to update with the FOSSL details
  --origin ORIGIN      Origin server name. Using openssl, the TLS cert will be
                       downloaded and stored in 'pem_file'
  --pem_file PEM_FILE  Origin's PEM file to use for creating the FOSSL section. If
                       unspecified, a temporary file called cert.txt will be created.
  --use_sni            Use SNI header when pulling the origin certificate


While building a secure configuration on the Akamai platform, you will need to provide the origin server's certificate information. Normally, if you have a standard Certificate Authority (CA), no special setting may be required. However, if you are using a self-signed certificate or a CA that is not part of the standard Akamai-supported set, you will need to pin the TLS certificate.

If you need to pin the origin certificate using the Property Manager interface, the UI runs the following command to extract the details:

openssl s_client -connect

However, when using the Property Manager API (PAPI) or while using the Akamai Configkit, you would need to follow these steps:

  • Pull in the origin certificate and add the details by running the openssl command. If the origin is not accessible due to an ACL, get the certificate information from the Ops teams.
  • Run a PAPI / ConfigKit command to extract the rules for the configuration.
  • Insert the certificate details and push out a new version of the configuration.

The FOSSL helper tries to automate this part of the job. If you already have the configuration rules, the helper will run the openssl commands and then insert the certificate details into the correct section of the rules, ready to push out with PAPI/ConfigKit.
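The two halves of that job can be sketched with Python's standard ssl module. Note that the `customCertificates` key below is a placeholder of my own for illustration, not the real PAPI rule schema, and `fetch_origin_cert` needs network access to the origin:

```python
import ssl

def fetch_origin_cert(hostname, port=443):
    """Pull the origin's certificate in PEM form, much like the
    openssl s_client step (requires network access to the origin)."""
    return ssl.get_server_certificate((hostname, port))

def insert_cert_into_rules(rules, pem):
    """Insert the PEM into the rule tree. 'customCertificates' is a
    hypothetical key used here for illustration only; the actual PAPI
    origin behavior has its own schema."""
    behaviors = rules.setdefault("rules", {}).setdefault("behaviors", [])
    behaviors.append({"name": "origin",
                      "options": {"customCertificates": [pem]}})
    return rules

# Offline demo with a dummy PEM; a real run would call fetch_origin_cert():
rules = insert_cert_into_rules({}, "-----BEGIN CERTIFICATE-----\n...")
print(rules["rules"]["behaviors"][0]["name"])  # origin
```

The pure insertion step is what the helper automates; the fetch step is simply the programmatic equivalent of running openssl by hand.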

Cert Update Pipeline

Here are the steps to update your rules to use the new cert.

Step 1: Get the current configuration rules.

Suppose the configuration file at Akamai has a known name. Here's the method to get back the rules.

./akamaiProperty retrieve --file rules.json

Step 2: Run

Assuming that the origin is not ACLed to allow just Akamai, you can run this script to pull the origin certificate and insert it into the rules.

python --file rules.json --origin

Step 3: Update rules

After the rules have been updated with the certificate information, run the akamaiProperty command to push out the update to the configuration on Akamai.

./akamaiProperty update --file rules.json

Step 4: Activate configuration

Once the update completes, you should be able to push the configuration out to Akamai staging network and test the new setup.

./akamaiProperty activate

This will push the latest configuration version to staging. Please see the documentation at the Akamai ConfigKit page for more details.


This could be of interest to the {OPEN} Developer Community.


We have a lot of new & exciting initiatives happening around our Fast Purge functionality. Follow this page for the latest updates, schedules, and FAQs.


What is Fast Purge?

Fast Purge ensures Akamai customers have the ability to update and refresh their content within seconds.


It provides the ability to invalidate and/or delete cached content via API and UI in a rapid and predictable manner, improving offload and performance for fresh, event-driven content. Before Fast Purge, customers avoided caching event-driven content, or had to cache it with very short TTLs. Fast Purge hence provides the following advantages:

  • Cache semi-dynamic base pages and API responses for long TTLs without sacrificing freshness, because the Akamai edge will refresh with your latest content as soon as your origin calls Akamai's purge API.
  • Update stale, outdated content in a timely, controlled way.
  • Help businesses remain relevant, preserve credibility and reputation, and reduce abandonment.
  • Grant customers more control over how they serve their content.

What's out there today

Fast Purge is already GA and being used by many customers today! All customers have the ability to purge by URL today.


What’s coming

Fast Purge by CPCode and Cache Tags are in beta today. This gives customers the ability to purge by CPCode or by a natural-language identifier (cache tag) within 5 seconds. The CPCode functionality goes LA on 07/24/2017 and goes GA in time for Edge '17. The Cache Tag functionality enters LA in time for Edge '17 (October) and goes GA in January 2018.


FAQ - General Fast Purge


Q: How do I get access to Fast Purge?

A: Fast Purge is now a part of all base products. It should be available on your account already! If for some reason you cannot find it, please contact Akatec or your Akamai account team.


Q: How do I access the Fast Purge UI?

A: Once logged into Luna, select Publish/Fast Purge from the mega-menu.


Q: How can I use the Fast Purge API?

A: Start by reading this API documentation. You may then begin issuing API requests after generating the proper credentials using the Manage APIs app. You may also find this blogpost about how to migrate to the new API helpful.
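A purge call to the V3 API is a small JSON body POSTed to a purge endpoint. The sketch below only builds the request path and body; the endpoint shape shown is my recollection of the public CCU V3 docs and should be verified against the API documentation, and EdgeGrid authentication is omitted entirely:

```python
import json

def build_purge_request(objects, action="invalidate", network="staging"):
    """Build the path and JSON body for a CCU V3 purge-by-URL call.

    action:  'invalidate' or 'delete'
    network: 'staging' or 'production'
    The path layout is an assumption; check the CCU V3 docs.
    """
    path = "/ccu/v3/{}/url/{}".format(action, network)
    body = json.dumps({"objects": list(objects)})
    return path, body

path, body = build_purge_request(["https://www.example.com/index.html"])
print(path)  # /ccu/v3/invalidate/url/staging
print(body)  # {"objects": ["https://www.example.com/index.html"]}
```

In a real integration you would send this with an EdgeGrid-signed HTTP client using the credentials generated in the Manage APIs app.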


Q: Do I still need to use the Transition App?

A: The Transition App is no longer needed. The V3 API will use Fast Purge by default. If you want to use CCU, please stick with the V2 API.


Q: Will Fast Purge be supported on older versions of the CCU API and on PublishECCU UI/API?

A: No. To leverage Fast Purge, you will need to integrate with the new CCU V3 OPEN API. With Purge-by-cache-tag you will be able to emulate most ECCU functionality.


Q: Can I use Fast Purge to refresh my HD libraries?

A: No. You can only use Fast Purge for individual URLs. Please continue to use the HD Content Control Utility for HD links.


Q: Do I use ‘purge by invalidate’ or ‘purge by delete’?

A: We recommend that you use ‘purge by invalidate’ in most cases for its better offload and origin-failure backup advantages.


Advantages of Purge by Invalidate over Delete

  • Invalidate should be the default capability for most customers because it provides:
    • Better offload + better performance.
    • If the edge server cannot reach the origin, invalidating allows us to serve stale content (typically better for customers than serving a failure page).
  • Invalidate is also better for Akamai because it is easier for edge servers to process.
  • Invalidate will cause the edge server to treat content as though its TTL has expired, resulting in an If-Modified-Since (IMS) revalidation request if the origin supports it. If the origin does not support IMS, it is recommended to turn off IMS in the configuration, but not to switch to Delete.
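The invalidate behavior described above can be modeled as a small decision function (my own sketch of the described semantics, not edge-server logic):

```python
def serve(cache_entry, origin_reachable, origin_modified):
    """Model what an edge does with a (possibly invalidated) object,
    per the behavior described above."""
    if not cache_entry["invalidated"]:
        return "serve-from-cache"
    if not origin_reachable:
        return "serve-stale"          # invalidate keeps a stale copy usable
    if origin_modified:
        return "fetch-new-copy"       # origin returns fresh content
    return "serve-cached-after-304"   # IMS revalidation: origin says 304

entry = {"invalidated": True}
print(serve(entry, origin_reachable=True, origin_modified=False))
# serve-cached-after-304
print(serve(entry, origin_reachable=False, origin_modified=False))
# serve-stale
```

The "serve-stale" branch is the key difference from Delete: a deleted object is gone, so an unreachable origin means a failure page instead.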


"Delete" completely removes content from all servers and should only be used when:

  • Purging illegal or copyright-violating content.
  • Purging offensive content, which the customer feels strongly should be deleted from all servers.
  • Purging content with corrupted headers where the timestamp cannot be trusted.
  • The customer would prefer, should the origin ever be unreachable, that we serve a failure page instead of the purged content.


FAQ - Fast Purge by CpCode


Q: How can I be enabled for Fast Purge by CPCode?

A: To use this functionality, you should be signed up for our beta program.


Q: What is the expected purge time?

A: All purges are expected to be completed within five seconds.


Q: How do I know my purge is done?

A: Since purges are completed almost instantly, we do not notify users about this today. You can expect your purges to be done in under 5 seconds.


Q: Where can I find documentation around the API?

A: In-depth API documentation can be found here:




FAQ - Fast Purge by Cache Tags


Q: What is a cache tag?

A: Akamai introduces a new, convenient way to update cached content on its edge servers. If you have a collection of objects that tend to be refreshed at the same time, you can now associate them with a cache tag and later delete or invalidate all content possessing this tag with a single purge request. Cache tags allow a website owner to add tags to items cached at the Akamai edge. The purpose of tagging is for a customer to be able to purge cached content by calling it with a natural-language identifier, instead of purging by URL, file extension, or CPCode.


Q: What are some typical use cases associated with purging by cache tags?

A: Any use case where the user today needs to purge multiple URLs/objects is a great candidate for cache tags: they can tag all of these URLs/objects with one tag and send in just one purge request.

A great use case is seasonal sales on an ecommerce website, e.g. Black Friday or Christmas. Once the sale has ended, you can purge all the sale prices across various pages by using a cache tag (let's say 'SALE'), instead of purging one page at a time.


Q: How are cache tags assigned?

A: You can add cache tags to your cacheable content by providing all tags for each object in an Edge-Cache-Tag HTTP response header to outbound traffic at your origin.

If there are multiple Edge-Cache-Tag HTTP response headers, only the first one will be respected. Multiple tag values inside a header are comma separated.

Note: Avoid including personal information or any other audit-compliant data in cache tag entries.
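At the origin this amounts to emitting one response header. A small sketch that builds the header value while enforcing the limits described elsewhere in this post (128 characters per tag, at most 128 tags per object); the function name is my own:

```python
def build_edge_cache_tag_header(tags):
    """Join tags into a single comma-delimited Edge-Cache-Tag value,
    enforcing the limits this post describes: each tag at most 128
    characters (over-long tags silently fail to purge) and at most
    128 tags per object (extra tags are ignored by the edge)."""
    if len(tags) > 128:
        raise ValueError("more than 128 cache tags on one object")
    for tag in tags:
        if len(tag) > 128:
            raise ValueError("cache tag exceeds 128 characters: %r" % tag)
    return ("Edge-Cache-Tag", ",".join(tags))

print(build_edge_cache_tag_header(["SALE", "CLOTHES_SALE"]))
# ('Edge-Cache-Tag', 'SALE,CLOTHES_SALE')
```

Validating at the origin is worthwhile precisely because the edge does not error on over-long tags; a purge against such a tag would simply miss the content.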



Q: How do Cache Tags work internally?

A: Essentially invoking a purge by cache tag involves the following steps:

  1. A user adds tags to their cacheable web assets by adding an “Edge-Cache-Tag” HTTP response header and a tag value to outbound traffic at origin
  2. A web asset is requested through Akamai
  3. Akamai associates the tag values found in the "Edge-Cache-Tag" HTTP header with content Akamai is caching
  4. The tag sits idle until the user initiates the purge
  5. Once the purge is invoked via the Luna portal or an API call, all cached content for the specified cache tag is purged.

Q: Are cache tags hostname specific or hostname agnostic? I.e., if I set a cache tag on my static domain content and the same cache tag on my primary domain content, will both be purged?

A: Yes, for our initial beta offering, both will be purged. 



Q: What level are cache tags scoped at today?

A: Cache tags today are scoped at an account wide level. 

Given this, if you have multiple groups or departments using the Purge by Cache Tag functionality within your account, we recommend following some kind of nomenclature that helps distinguish tags created by these different groups. This will primarily help prevent cache tag collisions: a scenario where multiple groups create the same tag and purge each other's content.

For example, consider an online apparel store that sells both clothes and shoes through two different departments, both of which fall within the same Akamai account. Assume both departments run a promo and tag some of their content as 'SALE'. Now if the clothing department decides to purge all content tagged as SALE, this will also purge content tagged by the shoes department. To avoid this scenario we recommend having some kind of nomenclature within departments; this could be as simple as every tag starting with <DepartmentName_>. In this case we would have had two different tags, CLOTHES_SALE and SHOES_SALE, and hence prevented any tag collision.


Also note that since cache tags are enabled at the account level today, they can be used across many CPCodes. In other words, the domain, CPCode, or origin configuration does not matter within an account.


Q: Are there any limits on cache tag purges?

A: Yes- cache tag purges have two limits:

      - Users cannot purge more than 5,000 tags/hour (per account)

      - Users cannot purge more than 10,000 objects/min (per account)

     Exceeding either of these limits will throw an error. 


Q: Can you talk more about cache tag nomenclature?

A: When assigning cache tags, follow these requirements:

    • A single cache tag cannot exceed 128 characters. If you enter a cache tag exceeding 128 characters you will not see an error, but the specific content associated with that tag will not be purged when the purge is submitted.
    • The maximum number of cache tags per object is 128. If you exceed this, the additional tags will be ignored.
      Note that the default Akamai maximum reply headers size is 8192 bytes.

    The cache tag format is derived from the format of a token in RFC 7230, Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing.

    Allowed characters:

    • Exclamation mark (!)
    • Hash sign (#)
    • Dollar sign ($)
    • Percent sign (%)
    • Asterisk (*)
    • Vertical bar (|)
    • Grave accent (`)

    Disallowed characters:

    • Open/closed parentheses (( ))
    • Comma (,)
    • Backslash (\)
    • Colon (:)
    • Semicolon (;)
    • Angle brackets (< >)
    • Equal sign (=)
    • Question mark (?)
    • At sign (@)
    • Braces ({ })
    • Square brackets ([ ])
    • Slash (/): will be allowed in a later version.
    • Space ( )

    • Cache tags in the Edge-Cache-Tag header must be delimited by a comma (,).
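Putting the requirements together, a tag validator might look like the following sketch (the regex encodes the RFC 7230 token characters minus the slash, plus the 128-character-per-tag and 128-tags-per-object limits; the function name is ours, not an Akamai API):

```python
import re

# RFC 7230 token characters (alphanumerics plus the allowed punctuation);
# slash is excluded because it is not allowed yet. Max 128 chars per tag.
_TAG_RE = re.compile(r"^[A-Za-z0-9!#$%&'*+.^_`|~-]{1,128}$")

def validate_cache_tags(tags):
    """Validate a list of cache tags and return the Edge-Cache-Tag value."""
    if len(tags) > 128:
        raise ValueError("more than 128 tags per object; extras are ignored")
    for tag in tags:
        if not _TAG_RE.match(tag):
            raise ValueError(f"invalid cache tag: {tag!r}")
    return ",".join(tags)
```

For example, `validate_cache_tags(["CLOTHES_SALE", "SHOES_SALE"])` yields a valid header value, while a tag containing a space, comma, or slash raises an error.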


Q: How can I get into the cache tag beta?

A: Please get in touch with your account team if you are interested in this.

Getting into the beta also requires certain mandatory prerequisites:
               a. All properties must be on Property Manager.
               b. The account must not use any legacy purges for the duration of the beta (CCU v2 URL, CPCode, & ARL purges, HDPurge, SOAP API). These will work unreliably once the cache tagging environment is enabled. Note that you can continue to use Fast Purge by URL or CPCode.
               c. No properties on networks other than FF or ESSL.
               d. Cache tag purge cannot be used with Image Manager or Resource Optimizer.

Note that all of the prerequisites have an account-wide scope, not just a contract-wide scope.



As communicated earlier this month, due to security concerns, Akamai is planning to block all V1 ARL traffic (refer below for more information on V1 ARLs) over the Akamai platform, including previously whitelisted V1 ARLs, by October 4th, 2017.


Consistent with prior communications, the use of V1 ARLs is a security risk since they can be exploited by attacks such as cross-site scripting. As such, in 2016 Akamai disabled the use of V1 ARLs with certain exceptions, which were added to a whitelist to avoid blocking those ARLs that were then in use (see blog post). We are currently planning to remove the whitelist on October 4th, 2017, effectively blocking all existing use of V1 ARLs.


The removal will happen in two phases. First, on June 19th, 2017, we will update the whitelist to remove V1 ARL traffic that the customer has already confirmed, through their Akamai representative, is safe to remove from the whitelist, along with traffic that is deemed safe to drop due to churned accounts or accounts with no V1 ARL traffic (0 hits) in the past 3 months. The impacted accounts' Akamai representatives have been notified. The remainder of the whitelist will be removed on October 4th, 2017.


If you have not already done so, please reach out to your Akamai representative immediately to analyse your V1 ARL usage and determine whether it can be blocked or should be secured via a custom configuration.

Your Akamai representative can help you with analysis and, if needed, take steps to secure the V1 ARLs.


Additional Resources on V1 ARLs:

Fast Purge (formerly known as Fast Invalidation) has now entered General Availability and is enabled for all customers. Fast Purge is a powerful way to use long TTLs on objects cached at Akamai, while simultaneously being able to quickly update the cached content. This release incorporates substantially increased performance and rate limiting of requests. Customers can now update content instantly.


If you are using our v2 API, your requests will always hit the older Content Control Utility (CCU) backend. If you are using the newer v3 API, your requests will always use the new Fast Purge backend. If you are using v3, but have explicitly turned Fast Purge OFF via the transition app, you will need to migrate back to the v2 API to keep using CCU.


Coming later this year are fast purge by CPCode and Cache Tag. The former accelerates CPCode purge times to Fast Purge times, providing a benefit if you have segmented your site by CPCode for more granular purges. The latter gives you a new, declarative way to track items in Akamai’s cache, and clear part of that cache selectively with your own rules. CPCode purge enters beta in mid-May, while cache-tags will enter beta in early July.


Where to find Fast Purge within Luna




Q: How do I get access to Fast Purge?

A: Fast Purge is now a part of all base products. All customers should have access to it regardless of contract line item.


Q: How do I access the Fast Purge UI?

A: Once logged into Luna, select Publish/Fast Purge from the mega-menu.


Q: How can I use the Fast Purge API?

A: Start by reading this API documentation. You may then begin issuing API requests after generating the proper credentials using the Manage APIs app. You may also find this blogpost about how to migrate to the new API helpful.
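As a sketch of what a v3 call looks like on the wire, the helper below builds the request path and JSON body for a purge (the endpoint shape follows the Fast Purge v3 API; EdgeGrid authentication and the actual HTTP call are omitted, and the helper function itself is hypothetical):

```python
import json

def fast_purge_request(urls, network="production", action="invalidate"):
    """Build the request path and JSON body for a Fast Purge (CCU v3) call."""
    if network not in ("production", "staging"):
        raise ValueError("network must be 'production' or 'staging'")
    if action not in ("invalidate", "delete"):
        raise ValueError("action must be 'invalidate' or 'delete'")
    path = f"/ccu/v3/{action}/url/{network}"
    body = json.dumps({"objects": list(urls)})
    return path, body

# POST this path and body (with EdgeGrid credentials) to your account's
# API host to purge the listed URLs:
path, body = fast_purge_request(["https://www.example.com/main.css"])
```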


Q: Do I still need to use the Transition App?

A: The Transition App is no longer needed. The V3 API will use Fast Purge by default. If you want to use CCU, please stick with the V2 API.


Q: What if I have a question that is not answered here?

A: Please submit your question below.

As of 25 May 2017, the Beta of the new CPS is now open for all customers and partners. Direct customers with Web Performance products can enable Beta Channel via Marketplace in Luna today to gain access. Indirect Web Performance customers and customers of Media and Security products should work with their account teams to have the beta enabled for their contract.


Once enabled, you can access the beta in Luna, choose Configure -> Certificate Provisioning System (CPS) (Beta).

Update 30 May 2017: This new version of our Certificate Provisioning System has been designed around a task-based interface which mirrors your own certificate provisioning workflows. CPS shows certificate activity happening in the system and even highlights certificates that need attention.

Features enabled in this first beta release:

  • All-new In Progress view on the landing page detailing certificates that have In Progress activity
    • new certificate requests, renewals, and modifications
    • highlights for certificate requests which need user attention
    • one-click access to certificate order status messages from our Certificate Authority partners
  • All-new Active view on the landing page detailing all your deployed certificates
    • call out of Staging versus Production deployments
  • Simplified search interface
  • All-new workflow for creating new certificates (DV, OV, EV, and third-party)
  • Download CSR for third-party certificates
  • Updated Certificate Detail view
  • Updated Network Deployment options view
  • Delete certificate functionality
  • In-line help with links to an updated online help reference
  • Faster, more responsive user experience
  • And many more features throughout the application


Features coming in a future release:

  • View DV validation tokens and status
  • View and Modify certificates (SAN modifications)
  • Upload third-party certificates

We will continue to update the application with additional features throughout the Beta. Both the new Beta application and the old application access the same back-end for certificate provisioning, so you can switch between them to access and modify all your certificates. Once the Beta is complete later this year, the old application will be decommissioned.


We encourage you to try out this new application. If you have feedback on this new beta capability, or have issues, please reach out to Akatec through your normal support channels. You can also follow this blog post to be notified when new Beta features are available.




Hello all. 


In March 2016, Akamai launched our Certificate Provisioning System (CPS) Self Service feature, enabling users to self-provision and manage their SSL/TLS certificates on Akamai’s Secure CDN.



Soon, we will launch a beta of an entirely new user experience for certificate provisioning. This new version has been designed based on the feedback we received over the last year, and focuses on a task-based interface. It shows certificate activity happening in the system and even highlights certificates that need attention.



Beta of the new CPS user experience will be open to all customers. Direct customers can enable Beta Channel via Marketplace in Luna today, and you will get access as soon as the Beta is open. Other customers should reach out to their account team or Customer Care to have the Beta enabled.



Follow this blog post to be notified when the Beta is available.

Mobile App Users: The Next Generation

What does the morning of a typical mobile user look like? It's probably something like this:

  • 6:00 a.m. - Your alarm wakes you up and automatically starts increasing the brightness to your bedroom lamps. The snooze button is not an option today!
  • 7:00 a.m. - On your morning run, you track your total mileage and pace, and then share your workout details and scoreboard on Facebook.
  • 8:00 a.m. - You check your phone to make sure your train is on time; you can't be late to work!
  • 8:15 a.m. - You catch your train and check Facebook, LinkedIn, Snapchat, and your standard news apps to get up to speed.
  • 8:30 a.m. - As you get off at your stop, you choose your coffee order and pay for it, so it's ready and waiting for you - no more waiting in line at Starbucks!
  • 8:45 a.m. - With coffee in hand, you walk to the office and check your office slack, Skype, and Whatsapp groups to prepare for the day ahead.
  • 9:00 a.m. You enter the office and get your day started.

So...what do all of the activities above have in common? Mobile apps. And in the first three hours of a day, it's totally normal to have interacted with 10+ apps to accomplish a variety of tasks. This is the reality of today's mobile user. To read more go here: Mobile App Users: The Next Generation - The Akamai Blog 


The State of Mobile App Performance

In our previous blog, we saw how a new generation of users is raising expectations for mobile apps like never before, and identified the three key success criteria for mobile apps: 1) increase customer conversions, 2) drive installs, and 3) increase customer loyalty. For this blog we profiled the Top 100 retail apps in the app store to explain how you can leverage Akamai features to meet the three success criteria for mobile apps.

We had three key goals in mind when profiling these mobile apps:

  1. How many first-party domains go over Akamai from a mobile app?
  2. How many mobile apps serve oversized images to users' devices?
  3. How many mobile apps leverage IPv6 and HTTP/2?

Here's what we learned.

API Analysis: First Party vs Third Party
A mobile app consists of two categories of APIs:
1) APIs that are critical for app load and responsible for user experience.
2) Third-party APIs responsible for collecting analytics, collecting crash information, enabling ad tracking, and social media integrations. This category of APIs is not critical for user experience. Read more here: The State of Mobile App Performance - The Akamai Blog


HTTP/2 support for HTTPie

Posted by Javier Garza, Apr 18, 2017

This blog explains how to enable HTTP/2 support for HTTPie.



HTTPie is a great command line tool for working with HTTP-based APIs (check the "Setting Up HTTPie for Akamai" blog if you want to learn more about HTTPie).

HTTP/2 (h2) is the new version of the HTTP protocol (check the Learning HTTP/2 O'Reilly book to learn more about h2).

Check HTTP/2 support on HTTPie

  1. Do a test HTTP request using HTTPie to an HTTP/2-enabled website to see if your HTTPie client already has HTTP/2 support. Look for the HTTP protocol version before the "200 OK" response; if you see "HTTP/1.1", follow the instructions below to enable HTTP/2 support. Open a Terminal window and type the command below:
$ http -h
HTTP/1.1 200 OK

Install HTTP/2 support on HTTPie

2. Install the HTTP/2 plugin for HTTPie: Open a Terminal window and type the command below:

$ pip install -U httpie httpie-http2
Requirement already up-to-date: httpie in /usr/local/lib/python2.7/site-packages
Collecting httpie-http2
  Downloading httpie-http2-0.0.1.tar.gz
Requirement already up-to-date: Pygments>=2.1.3 in /usr/local/lib/python2.7/site-packages ...
Requirement already up-to-date: requests>=2.11.0 in /usr/local/lib/python2.7/site-packages ...
Collecting hyper (from httpie-http2)
  Downloading hyper-0.7.0-py2.py3-none-any.whl (269kB)
    100% |████████████████████████████████| 276kB 2.3MB/s
Collecting hyperframe<4.0,>=3.2 (from hyper->httpie-http2)
  Downloading hyperframe-3.2.0-py2.py3-none-any.whl
Collecting h2<3.0,>=2.4 (from hyper->httpie-http2)
  Downloading h2-2.6.2-py2.py3-none-any.whl (71kB)
    100% |████████████████████████████████| 81kB 4.7MB/s
Requirement already up-to-date: enum34<2,>=1.0.4; python_version == "2.7" or python_version ...
Collecting hpack<4,>=2.2 (from h2<3.0,>=2.4->hyper->httpie-http2)
  Downloading hpack-3.0.0-py2.py3-none-any.whl
Building wheels for collected packages: httpie-http2
  Running bdist_wheel for httpie-http2 ... done
  Stored in directory: /Users/jgarza/Library/Caches/pip/wheels/68/a3/c0/1266ef4095eba35673a ...
Successfully built httpie-http2
Installing collected packages: hyperframe, hpack, h2, hyper, httpie-http2
Successfully installed h2-2.6.2 hpack-3.0.0 httpie-http2-0.0.1 hyper-0.7.0 hyperframe-3.2.0

Verify HTTP/2 support on HTTPie

3. Repeat the same request we did earlier and confirm you see "HTTP/2" in the protocol version.

$ http -h
HTTP/2 200

Have fun with HTTP/2!
