Bad Actor or Bad Site Design? (Part I)

Blog Post created by B-3-P4FD9 Employee on Oct 2, 2014

Thinking and acting purposefully with a security-first mindset is not without its challenges. We walk this fine line with a Protect and Perform product bundle for customers that combines the industry-leading web performance of Ion with the multifaceted application protection capabilities of Kona Site Defender. Couple that with the built-in protections offered transparently by the Akamai Intelligent Platform and you have unmatched capabilities to run your online business faster, safer, and more reliably than your competitors.


But now that you have all of that safe, fast, and reliable technology on your side, how can you separate the good from the bad: a real attack from a flash crowd or a marketing campaign you may have been unaware of? IP Intelligence, by way of location identification using EdgeScape and trust through emerging IP Reputation, is one breakthrough technology that Akamai is uniquely positioned to capitalize on, simply because of the volume of traffic that we see over our network on a daily basis. The insights we attain are one reason we can effectively report on attack trending by industry in our State of the Internet report. IP Intelligence can score users based on their recent past behaviors. Given the dynamic nature of the Internet, this is a much more effective and safe means of handling unwanted user requests than creating and maintaining static IP blocking lists.
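To make the contrast concrete, here is a minimal sketch of behavior-based scoring. Everything here is illustrative and hypothetical, not Akamai's actual IP Reputation implementation: the class name, the scoring window, and the block threshold are all assumptions. The key property it demonstrates is that a score built on *recent* behavior ages out on its own, whereas a static blocklist entry never does.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical reputation tracker (illustrative only, not Akamai's API).
# Misbehavior outside the scoring window stops counting against an IP,
# unlike a static blocklist entry that persists "until the end of time".
class ReputationTracker:
    def __init__(self, window_days=7, block_threshold=5):
        self.window = timedelta(days=window_days)
        self.block_threshold = block_threshold
        self.events = defaultdict(list)  # ip -> timestamps of bad requests

    def record_bad_request(self, ip, when):
        self.events[ip].append(when)

    def score(self, ip, now):
        # Only misbehavior inside the recent window contributes to the score.
        recent = [t for t in self.events[ip] if now - t <= self.window]
        self.events[ip] = recent
        return len(recent)

    def should_block(self, ip, now):
        return self.score(ip, now) >= self.block_threshold

now = datetime(2014, 10, 2)
tracker = ReputationTracker()
for _ in range(6):
    tracker.record_bad_request("203.0.113.7", now - timedelta(days=1))

print(tracker.should_block("203.0.113.7", now))                        # True
print(tracker.should_block("203.0.113.7", now + timedelta(days=10)))   # False
```

Note what happens when the same address is later handed to an innocent user via DHCP: ten days on, the old misbehavior has aged out of the window and the IP is no longer blocked, which is exactly the failure mode of static lists described below.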


Take for example one end user several years ago who was complaining over social media channels that he was not able to connect to a wide swath of top retail chains. As it turned out, his IP had been blacklisted nearly across the board in all of these customer configurations. That means whoever was unfortunate enough to get that IP handout through DHCP, ISP allocation, or a common rule set could have been literally cut off from a good chunk of well-established brand-name Internet sites due to the behavior of someone else. This practice of course still happens today through many Web Security vendors when botnets are identified and statically blocked (until the end of time), often to the detriment of unsuspecting end users harboring malware. For a sidebar read on that, be sure to check out my post on The Death of Usability in the Age of Security.


Static blocking of IPs, while still serving an important facet of multi-layered protection, has long been an outdated paradigm for protecting web sites, from the increasing proliferation of IPv6 to the fact that the Internet itself is too dynamic to maintain a huge block of IPs while reliably ensuring you are not impacting your own customers and your bottom line. To illustrate just how dynamic this address assignment is, I noted in my blog post Can crowdsourcing better map the physicality of the Internet? that "looking over a 3 week period of a sample of EdgeScape data with 10 million IP addresses for instance, the percentage delta change in physical locality ranged from .24% all the way to 5.6% week over week." Just to conjecture for a moment: if 5% of the entire Internet changes physical location over a span of weeks, how many end users are reassigned IPs in the same physical locality during that time span, and how can you confidently block based on a static list knowing that?
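The week-over-week delta cited above can be sketched as a simple comparison of two geolocation snapshots. The snapshot data below is made up for illustration; the real measurement used EdgeScape samples of 10 million addresses, not four.

```python
# Sketch: percentage of IPs whose mapped physical location changed
# between two weekly snapshots. Data here is an illustrative stand-in
# for the EdgeScape sample described in the quoted measurement.

def locality_delta_pct(prev_snapshot, curr_snapshot):
    """Percent of IPs present in both snapshots whose location changed."""
    common = prev_snapshot.keys() & curr_snapshot.keys()
    if not common:
        return 0.0
    changed = sum(1 for ip in common if prev_snapshot[ip] != curr_snapshot[ip])
    return 100.0 * changed / len(common)

week1 = {"198.51.100.1": "Boston", "198.51.100.2": "Denver",
         "198.51.100.3": "Austin", "198.51.100.4": "Miami"}
week2 = {"198.51.100.1": "Boston", "198.51.100.2": "Seattle",
         "198.51.100.3": "Austin", "198.51.100.4": "Miami"}

print(locality_delta_pct(week1, week2))  # 25.0 (1 of 4 common IPs moved)
```

At the scale of the quoted sample, even the low end of the range is substantial: 0.24% of 10 million addresses is 24,000 IPs changing physical locality in a single week, and 5.6% is 560,000, which is why a static list drifts out of date almost immediately.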