
My cloud servers fended off a major hacking attack this weekend. Here is what happened and what it took to recover.
This weekend my servers out in the cloud fended off a major hacking attack against two of the systems I have opened to public access. The attack started on Friday night as a simple series of scans to see what was sitting in the IP space I use. That is a fairly standard pattern that information security people see every day, so thinking it was routine I closed up shop on Friday and went home.
On Sunday I got an alert that the system had hung, and when I tried to reach it over HTTP and SFTP the computer simply would not respond; there was no way to access it. From the control panel provided by my cloud hosting company I rebooted the box, thinking it was hung on a process that was keeping it from being reached. Over Sunday night I got three more alerts that the box had hung.
Monday morning when I got into work I rebooted the box again (it is a low-priority system with almost no regular use over the weekend) and dove into its error logs.
Over the Saturday-to-Sunday period someone had made a serious attempt to get into the computer. Over 250 GB of access logs and over 300 GB of error logs had nearly consumed the disk space I was using. The computer was not simply hung on a process; it had been resource starved. The attacker had hit the system so hard that there were no ports left open for a legitimate connection, and toward the end of the attack (Sunday night) they had hit it with what looks like a simple denial-of-service attack.
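What bit me here was that the logs grew faster than anything watching them. Below is a minimal sketch, in Python, of the kind of check that would have flagged the problem earlier; the log path and thresholds are assumptions, not my actual setup, so adjust them for your own servers.

```python
import shutil
import subprocess
from pathlib import Path

LOG_DIR = Path("/var/log/apache2")   # assumed log location; adjust for your distro
DISK_ALERT_PCT = 90                  # warn when the filesystem is this full
LOG_ALERT_GB = 100                   # warn when the logs alone exceed this size

def log_size_gb(path: Path) -> float:
    """Total size of all files under the log directory, in gigabytes."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1e9

def disk_used_pct(path: Path) -> float:
    """Percentage of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def tcp_socket_count() -> int:
    """Rough count of current TCP sockets, via `ss` on Linux."""
    out = subprocess.run(["ss", "-tan"], capture_output=True, text=True, check=True)
    return max(len(out.stdout.splitlines()) - 1, 0)  # drop the header line

if __name__ == "__main__":
    size = log_size_gb(LOG_DIR)
    used = disk_used_pct(LOG_DIR)
    conns = tcp_socket_count()
    print(f"logs={size:.1f} GB  disk={used:.0f}% full  tcp_sockets={conns}")
    if size > LOG_ALERT_GB or used > DISK_ALERT_PCT:
        print("ALERT: log growth is eating the disk; someone may be hammering the box")
```

Run from cron every few minutes and pointed at a pager instead of stdout, something like this would have turned the Sunday "it hung" alerts into a Saturday "the logs are exploding" alert.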
My takeaway is that the computer survived and came back into operation with a simple reboot of the cloud instance, which freed the resources consumed during the attack. No data was lost or stolen. The system's role is to deliver multimedia and feed data back to a Learning Management System, so the attack cost the LMS some capability, but nothing that would have killed the entire system.
The 300 GB of error logs is overkill. My assumption is that at some point the hacker or hackers got frustrated enough at not getting into the system that they simply launched a denial of service against the box, aiming to resource starve it and cause problems for the system administrator over the weekend. I do not think they knew it was in the cloud, or that restoring service was a simple matter of rebooting the box.
The hacker or hackers failed to get into the box, which is good, but resorted to a DDoS to cause resource starvation as a final act. I do not think we are dealing with a true professional, but I do think we are dealing with someone a step above a script kiddie. They had an impressive amount of firepower for their DDoS; we logged thousands of IPs on Sunday night. My belief is that the person or persons had access to a botnet or a very large number of compromised systems to make this work.
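That "thousands of IPs" figure came straight out of the access logs. If you want to do the same quick triage, here is a minimal sketch, assuming Apache/Nginx-style access logs where the client IP is the first field; the log path is a placeholder.

```python
from collections import Counter

ACCESS_LOG = "/var/log/apache2/access.log"  # assumed path; point it at your own logs

def count_sources(log_path: str) -> Counter:
    """Tally requests per client IP in a common/combined-format access log."""
    counts = Counter()
    with open(log_path, errors="replace") as fh:
        for line in fh:
            # The client IP is the first whitespace-delimited field in the
            # common and combined Apache/Nginx log formats.
            counts[line.split(" ", 1)[0]] += 1
    return counts

if __name__ == "__main__":
    sources = count_sources(ACCESS_LOG)
    print(f"unique source IPs: {len(sources)}")
    for ip, hits in sources.most_common(10):
        print(f"{ip:>15}  {hits} requests")
```

A lone attacker shows up as a handful of very busy addresses; a botnet shows up as thousands of addresses each making a modest number of requests, which is what I saw here.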
I pay 20 cents a gigabyte for bandwidth, and with roughly 500 GB of attack traffic aimed at the system according to my monitoring, I paid about 100 dollars to my cloud service provider for bandwidth consumed during the attack.
I only had a temporary loss of one system, because we distributed the cloud architecture across multiple systems in different data centers. As users switched over to other data centers, the system performed as architected: people were able to get their data over the weekend, and nothing was truly slowed down or made inaccessible by the attack.
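In our case the switching happens in the LMS and DNS layers, but the idea is simple enough to show client-side. A minimal sketch, assuming hypothetical URLs for the same content served from two data centers:

```python
import urllib.error
import urllib.request

# Hypothetical endpoints for the same content in two data centers;
# substitute the real URLs for your own deployment.
ENDPOINTS = [
    "https://media-dc1.example.com/course/lesson1.mp4",
    "https://media-dc2.example.com/course/lesson1.mp4",
]

def fetch_with_failover(urls: list[str], timeout: float = 5.0) -> bytes:
    """Try each data center in order and return the first successful response."""
    last_error: Exception | None = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc  # this data center is down or starved; try the next
    raise RuntimeError(f"all endpoints failed, last error: {last_error}")

if __name__ == "__main__":
    data = fetch_with_failover(ENDPOINTS)
    print(f"fetched {len(data)} bytes")
```

The point is not the code, it is the shape: every consumer of the data has somewhere else to go when one copy of the system is being starved.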
It took two hours to go through the log files to see what had happened, and 15 minutes to generate the report for IT. This is the quickest I have ever worked through an attack, cleanup and log analysis included, and the cheapest in terms of loss or dollar cost. It also made for a fun incident with a ton of data to use in the classroom and to share. The good part is that the distributed architecture worked, which validates the way we built the cloud-based system with failover in mind, even though we were designing for ordinary failures rather than a hacking-attack-induced one.
It is possible to attack a cloud computing system, and it is possible to resource starve a cloud computer, but in the long run survivability and access to data depend on the architecture the system was originally built around. If you are building a cloud space for your company, think in terms of survivability and failover: if a system in your cloud space fails for any reason, how do you recover and still present data to the end user? Hacking attacks happen, and hackers will get angry and try to DDoS your site off the planet. How you architect your cloud space and cloud services will help you survive hackers as well as the occasional other failure in the system.
(Cross-posted @ IT Toolbox)