If you have ever dealt with security issues in a datacenter, or even on a single server, you will know what vulnerability scanning means. It is a tool used by both the “good” and the “bad” guys. The “good” guys use it to scan their network or server (in the case of a small business running a single dedicated server) to make sure that everything is up to date and free of security holes. Vulnerability assessment can be done in different ways, but some of the widely used scanners are very intrusive. In fact, thorough scans have been known to bring down an entire network or set of servers, leading to outages. Yet businesses cannot live without such scanning; it is the only way they can stay one step ahead of the attackers. Indeed, e-commerce websites and datacenters hosting financial information are required by PCI compliance rules to run vulnerability scans periodically in order to stay compliant. You might have seen this on e-commerce sites in the form of “Verified by xxxx” logos attesting to the security and privacy of the data on those sites.
In the traditional computing world, this was not a big problem. Enterprises owned their own datacenters and small businesses had their own dedicated servers. Any outage during the scanning process affected only the business concerned. In the world of Cloud Computing, multi-tenancy is fast becoming the key concept. In this new era of shared resources, businesses increasingly face a dilemma: they cannot run the scans or penetration tests needed to make sure their environment is secure. In fact, most Cloud providers explicitly prohibit such scans in their terms of service, because any scan run by one customer has the potential to disrupt services for other customers.
Even if we set aside the common sense we gained from our Security 101 courses and skip these pre-emptive security measures, there are regulatory requirements we have to meet. For example, PCI (Payment Card Industry) compliance requires vulnerability assessments when dealing with financial data such as credit card processing. These regulatory requirements must be met for the very existence of the business itself. If Cloud providers explicitly ban such scans in their terms, businesses are left with only one option: not moving to the Cloud. In the end they lose out, because they cannot leverage the advantages of the Cloud.
The debate on how best to handle this has been going on in the security expert community and also among the “Cloud pundits”. Recently, Craig Balding of CloudSecurity.org wrote a post offering a suggestion to deal with this problem. I thought I would discuss his suggestion here to create more awareness of the topic and, if possible, build on his ideas. His solution is simple and straightforward.
Assuming a cloud provider with a more measured approach towards
vulnerability scanning of customer cloud infrastructure, we now need a
simple, mutually trusted mechanism to agree scan sources, rate limits
etc. Something like a “ScanAuth” (Scan Authorize) API call
offered by cloud providers that a customer can call with parameters for
conveying source IP address(es) that will perform the scanning, and
optionally a subset of their Cloud hosted IP addresses, scan start time
and/or duration. This request would be signed by the customer’s
API secret/private key as per other privileged API calls. The provider
receiving the request can rely on the digital signature as proof that a
scan is authorised with the associated parameters. After the provider
has processed the scan authorisation request, the provider could return
a status code approving or denying the request (with a possible reason
code to allow resubmission with more acceptable parameters). This
response can optionally include rate limits which the customer can use
to tune the intensity of their scanner.
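To make the idea concrete, here is a rough sketch in Python of what such a signed ScanAuth request might look like. This is purely illustrative: the field names, the HMAC-SHA256 signing scheme, and the endpoint semantics are my own assumptions, since no provider offers this API today.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret issued by the Cloud provider to the customer.
API_SECRET = b"customer-api-secret"

def sign_request(payload: dict, secret: bytes) -> str:
    """Sign the canonical JSON form of the request with HMAC-SHA256,
    as Craig suggests for other privileged API calls."""
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, canonical, hashlib.sha256).hexdigest()

def build_scan_auth_request() -> dict:
    # Field names are illustrative assumptions, mirroring the parameters
    # described above: scanner source IPs, an optional subset of the
    # customer's Cloud-hosted IPs, and a scan window.
    payload = {
        "scanner_ips": ["203.0.113.10"],        # IPs that will perform the scan
        "target_ips": ["198.51.100.5"],         # optional subset of hosted IPs
        "start_time": int(time.time()) + 3600,  # window start (epoch seconds)
        "duration_seconds": 7200,               # requested window length
    }
    payload["signature"] = sign_request(payload, API_SECRET)
    return payload

request = build_scan_auth_request()
```

The provider, holding the same secret, would recompute the signature over the unsigned fields and compare; a match is the proof Craig mentions that the scan was authorised with exactly those parameters. The response (status code, optional reason code, optional rate limits) would flow back the same way.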
The provider can now whitelist the customer provided scanner IP(s)
for the duration of the requested scanning window such that active
countermeasures like anti-DoS controls are not triggered, resulting in
a ‘cleaner’ scan (and hence a more accurate report).
Should the scanning activity exceed any specified limits, or
communicate with IP addresses not associated with customer virtual
machines, the provider could instantly blacklist the scanning IP or
apply traffic shaping.
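The provider-side enforcement described above (whitelist during the window, shape or blacklist on violations) could be sketched as a simple policy check at the network edge. Again, the class, thresholds, and action names here are illustrative assumptions, not anything a real provider ships.

```python
class ScanWindow:
    """Hypothetical record of one approved ScanAuth request."""

    def __init__(self, scanner_ips, target_ips, start, duration, rate_limit_pps):
        self.scanner_ips = set(scanner_ips)   # whitelisted scanner sources
        self.target_ips = set(target_ips)     # the customer's authorised targets
        self.start = start                    # approved window (epoch seconds)
        self.end = start + duration
        self.rate_limit_pps = rate_limit_pps  # rate limit returned in the response
        self.blacklisted = set()

    def check_packet(self, src_ip, dst_ip, now, observed_pps):
        """Decide what the edge should do with one observed packet."""
        if src_ip in self.blacklisted:
            return "drop"
        if src_ip not in self.scanner_ips or not (self.start <= now <= self.end):
            return "normal"                    # ordinary anti-DoS controls apply
        if dst_ip not in self.target_ips:
            self.blacklisted.add(src_ip)       # scanning beyond authorised targets
            return "drop"
        if observed_pps > self.rate_limit_pps:
            return "shape"                     # throttle rather than hard-block
        return "whitelist"                     # suppress countermeasures: cleaner scan

window = ScanWindow(["203.0.113.10"], ["198.51.100.5"],
                    start=1000, duration=3600, rate_limit_pps=500)
```

The point of the "whitelist" path is the one Craig makes: with countermeasures suppressed for the approved window, the scan report reflects the real exposure rather than the anti-DoS layer.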
In my opinion, having no solution at all is not acceptable. By enabling access to scanning through a ScanAuth API, Cloud vendors can keep tabs on the scanning process and make sure it doesn’t bring down their service completely. A good set of standard guidelines on the parameters to be used for scanning would probably help as well. Still, my knowledge of security is rudimentary compared to that of Craig Balding, Christofer Hoff and others, and I hope to hear more from these experts on this topic. If you have a take on Craig’s suggestion, or any other suggestion of your own, feel free to jump in below in the comments section. As we have always said in this space, we encourage guest bloggers to discuss different topics related to Cloud Computing. If you want to write about Cloud Security and how we can deal with issues like this, drop me a line.