Responding to typical breaches on AWS
Alt-Title: “Your AWS account has been mining bitcoin!”
You have received a scary email from AWS Abuse. Common subject lines may include:
- “Your instances have been making illegal intrusion attempts”
- “Your instances have been port scanning remote hosts on the Internet”
- “Your AWS account is compromised”
AWS is not responsible for discovering the root cause of your breach. Their shared responsibility model makes it clear that you’re in charge.
It’s also unlikely that you would receive this email unless you were victimized by a fairly unsophisticated attack. Those have some common root causes.
You may have leaked an AWS key.
Leaked Root or IAM credentials are extremely common and are likely a top cause of AWS abuse emails; there are plenty of public write-ups of this happening in the wild.
Your Root key or IAM credentials may have been leaked to Pastebin, a public repository, an S3 bucket, an outside attacker, or other wildly unpredictable side channels.
With this access, an attacker can spin up instances and mine bitcoin, flood their enemies with traffic, or further attack other AWS customer accounts using you as an intermediary. All of these behaviors will make AWS angry and deliver you an abuse message or kneecap your account altogether. In the worst case, an adversary can delete your entire company.
To investigate: This misbehavior can be discovered and understood in CloudTrail logs, if you already had them enabled. If not, you may have to ask AWS support for CloudTrail logs during the window of abuse. Luckily, AWS just announced a free retention window of seven days.
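As a starting point, a minimal sketch of pulling that recent event history yourself with boto3 is below. It assumes you suspect a specific access key; the key ID is a placeholder, and you would run this per region of interest.

```python
# Sketch: pull recent CloudTrail event history for a suspect access key using
# the lookup_events API (the same data behind the console's event history view).
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

suspect_key = "AKIAEXAMPLEEXAMPLE"  # placeholder: the key you suspect leaked
start = datetime.now(timezone.utc) - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": suspect_key}],
    StartTime=start,
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])
```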
You should assume unreliability with AWS support. They may not return your logs, and they may not return them quickly.
Additionally, the IAM dashboard and credential report tool may offer quicker leads while you are waiting on CloudTrail, or while someone else takes on that task.
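If you prefer to pull the credential report programmatically, a minimal sketch is below. It only assumes IAM permissions to generate and read the report; the report itself is a CSV covering every user's keys and their last use.

```python
# Sketch: generate and read the IAM credential report to spot stale or
# unexpected access keys while you wait on CloudTrail.
import csv
import io
import time

import boto3

iam = boto3.client("iam")

# Kick off report generation and poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    print(row["user"],
          row["access_key_1_last_used_date"],
          row["access_key_2_last_used_date"])
```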
Your goal is to identify the specific access keys that may have leaked, and what sort of modifications they may have made to your account. (Did they make new keys you are unaware of? Did they modify any security policies? Did they share any instance snapshots with other accounts?)
AWS support will not proactively investigate to this level of depth. It is no longer surprising for me to see leaked keys used to create new keys or new users so an attacker can retain access and continue abusing a victim, so it's important to go down this road.
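One hedged way to look for that kind of persistence is to search event history for API calls attackers commonly use to create backdoors. The sketch below assumes boto3 and a seven-day window; the event names are real CloudTrail event names, but the list is illustrative rather than exhaustive.

```python
# Sketch: scan recent CloudTrail event history for calls commonly used to
# create persistence (new users, new keys, policy changes, shared snapshots).
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start = datetime.now(timezone.utc) - timedelta(days=7)

suspicious = [
    "CreateUser", "CreateAccessKey", "CreateLoginProfile",
    "PutUserPolicy", "AttachUserPolicy", "UpdateAssumeRolePolicy",
    "ModifySnapshotAttribute", "ModifyImageAttribute",
]

for name in suspicious:
    # lookup_events accepts only one LookupAttribute per call, so query each name.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
        StartTime=start,
    )["Events"]
    for event in events:
        print(event["EventTime"], event.get("Username"), event["EventName"])
```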
To mitigate: You would need to make any stolen access keys inactive or delete them altogether, and replace them with new ones that haven’t leaked.
Additionally, you will need to revert any backdoor keys, objects, or Lambda functions an attacker may have created, all discovered from CloudTrail logs. It’s possible that trust relationships of roles or policies were tampered with as well.
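A minimal sketch of the key rotation step is below, assuming you know the user and key ID involved; both are placeholders. Deactivating first is reversible if you accidentally cut off something legitimate.

```python
# Sketch: deactivate a stolen access key (reversible), then delete it and
# issue a replacement. User name and key ID are placeholders.
import boto3

iam = boto3.client("iam")

user = "deploy-bot"                 # placeholder: the owner of the leaked key
leaked_key = "AKIAEXAMPLEEXAMPLE"   # placeholder

# Disable first so you can re-enable if something legitimate breaks.
iam.update_access_key(UserName=user, AccessKeyId=leaked_key, Status="Inactive")

# Once confident, remove the key entirely and mint a replacement.
iam.delete_access_key(UserName=user, AccessKeyId=leaked_key)
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("New key ID:", new_key["AccessKeyId"])  # store the secret somewhere safe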
You should consider using roles, which are much more secure and convenient in many situations.
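For illustration, the sketch below shows what "using roles" looks like in practice on an EC2 instance with an instance profile attached: boto3 picks up temporary role credentials automatically, so there is no static key in code or config files to leak.

```python
# Sketch: with an instance profile attached, boto3 resolves temporary role
# credentials on its own -- no access keys appear anywhere in the code.
import boto3

s3 = boto3.client("s3")  # no keys configured; credentials come from the role
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```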
You may have a server exposed with a widely known vulnerability.
If IAM credentials were strictly not involved in the breach, another common scenario may be exploitation of servers you have exposed to the internet. A significant amount of exploitation is automated, scanning the internet for known vulnerabilities to exploit.
From here, an attacker would then start their attack with access to your server. They may install mining software, begin scanning other hosts on the internet, or begin DoS’ing targets. Some of these behaviors immediately result in AWS warning emails and are easily detectable by AWS Security.
An adversary may also discover an IAM key (like the above), or access instance metadata and spin up new instances from there. If this is the case, you may need to mitigate the root vulnerability before mitigating the exposed IAM key or role.
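To see what an attacker with shell access could have grabbed, you can query the instance metadata service from the suspect host itself. The sketch below assumes the older, token-less metadata service is enabled and that the instance has a role attached; it only reads what is already available to any process on the box.

```python
# Sketch: query the EC2 instance metadata service from a suspect host to see
# which role credentials an attacker with shell access could have obtained.
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

role = urllib.request.urlopen(IMDS, timeout=2).read().decode().strip()
creds = json.loads(urllib.request.urlopen(IMDS + role, timeout=2).read().decode())
print("Role exposed to anyone on this host:", role)
print("Temporary key ID:", creds["AccessKeyId"], "expires", creds["Expiration"])
```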
To investigate: If the abuse email AWS has sent you indicates a specific instance as a starting point, that is as good a place as any to see if it is publicly accessible.
- Does it have a publicly routable IP address?
- What servers are running? Can they be reached by anyone on the internet?
- If unknown, can you identify these exposed services with an nmap scan?
- Search the version numbers of these exposed applications and see if there are known vulnerabilities, or patches for security issues.
- Can you reduce your servers' exposure to the internet, if they aren't publicly used? (See the sketch after this list.)
- Are these services misconfigured? (i.e., a wide-open HTTP proxy, an SMTP open relay, or something usable for a DNS/NTP amplification attack)
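As a quick inventory of what you have open to the world, the hedged sketch below lists security groups in one region that allow ingress from anywhere. It assumes boto3 and read access to EC2; it checks security groups only, not network ACLs or public IP assignment.

```python
# Sketch: flag security groups that allow ingress from 0.0.0.0/0, a quick way
# to inventory internet exposure in one region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in group["IpPermissions"]:
        open_ranges = [r for r in perm.get("IpRanges", []) if r["CidrIp"] == "0.0.0.0/0"]
        if open_ranges:
            ports = f"{perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')}"
            print(group["GroupId"], group["GroupName"], "open to the world on", ports)
```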
Some examples of commonly exploited services are publicly accessible Jenkins interfaces. Apache Struts has also been exploited in the wild; this is the same issue purported to be involved in the Equifax breach, and your infrastructure is no exception to the same type of issue.
These sorts of examples appear all the time, and make a situation worse when a compromised host stores other valuable credentials to be used in a CI/CD pipeline.
Collect information in case abuse within your account resurfaces.
The above scenarios are common, but not exhaustive. You’ll want to enable the following configurations to make future troubleshooting much easier.
If an incident rears its head again, this will capture more information about it.
- You want to enable CloudTrail in all regions, preferably flowing through a CloudWatch Logs group so you can query the logs quickly. (See the sketch after this list.)
- VPC Flow Logs will help you troubleshoot unknown traffic coming to / from your EC2 hosts that might resemble the abuse. You'll want to plan for, and monitor, costs in CloudWatch, as this can get noisy.
- The CloudWatch Logs agent can help you centralize /var/log/syslog and troubleshoot malicious access from abused SSH credentials, abuse of sudo, or other unknowns.
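A minimal sketch of the first two items is below. The bucket name, log group and role ARNs, and VPC ID are placeholders, and the S3 bucket and IAM roles are assumed to already exist with the appropriate policies attached.

```python
# Sketch: enable a multi-region CloudTrail trail and VPC Flow Logs with boto3.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# One trail that records API activity in every region.
cloudtrail.create_trail(
    Name="account-wide-trail",
    S3BucketName="my-cloudtrail-bucket",  # placeholder bucket with a CloudTrail policy
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:cloudtrail:*",  # placeholder
    CloudWatchLogsRoleArn="arn:aws:iam::111122223333:role/cloudtrail-to-cloudwatch",         # placeholder
)
cloudtrail.start_logging(Name="account-wide-trail")

# Flow logs for one VPC, delivered to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",  # placeholder
)
```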
Discovering a breach does not exclude other breaches.
When you have discovered the root cause of an issue, it's important to consider that you might not have a lone adversary who discovered and exploited it. In fact, this class of issue can attract multiple attackers due to its ease of exploitation.
If you or your company operates on sensitive data for customers or consumers, you may be obligated to provide breach notification or fulfill other duties from your contracts. You will likely need to involve legal counsel or a reputable incident response firm that can take a forensic approach to the investigation. This will increase your confidence that your breach was not more substantial than the behavior that caused the AWS abuse email to be delivered.
Your incident may turn out to be a casual "drive-by" attack from script kiddies, and your sensitive data may not be involved, in which case you may not have larger obligations. Be very clear: you should only settle on that opinion if it is informed by substantial visibility into the incident.
Otherwise, it may be smart to seek help.
When in doubt, blow it all up.
Sometimes it can be too expensive or time-consuming to completely understand a root cause, and a scorched-earth approach is your only option.
Just be aware that once you’ve decided to destroy resources, you will have eliminated any opportunity to fully understand your incident. You will also cut off most opportunity for an external firm to perform more sophisticated forensics on your behalf. As said before, you may have contracts or regulation that obligate you to avoid this drastic step.
At this juncture, if you haven't come away with any reasonable root cause, you may be rebuilding into a vulnerable state. Don't make a decision like this lightly; save this course of action for when you have run out of other options.
Conclusion
AWS key leaks and internet-exposed services with remote code execution are responsible for most of the abuse-email problems I get involved with.
This writeup was not meant to focus on preventative lessons. Those lessons become painfully clear during an incident.
But it goes without saying that prevention improves response. Be sure to understand the concepts in the AWS best practices whitepaper, and make sure you've enabled your CloudTrail / VPC / CloudWatch logs for a future incident.
@magoo
I write incident response stuff on Medium.