Intro: Malware C2 with Amazon Web Services

Researchers at Rhino Security Labs have developed a way to use Amazon's AWS APIs for scalable malware Command and Control (C2), subverting a range of traditional blocking and monitoring techniques. By leveraging the Cobalt Strike "External C2" specification, we've established a reliable malware channel that communicates only with a trusted cloud source – the Amazon AWS APIs themselves.

Malware Background: Why AWS APIs for C2?

Before we dive into the details, let's break down why this type of attack is so effective against traditional security controls. While malware has historically used a range of protocols – DNS, FTP, HTTP, and others – advances in packet analysis and protocol restriction have left HTTPS as the primary protocol for malware communication. Depending on the attacker's sophistication, only a couple of popular strategies remain effective today:

1 – Malicious Domain(s)

The large majority of samples simply use a malicious domain (or set of domains) the operators control – either purchased or compromised – and reach out to those domains for all communication. In theory, this is easy to detect: DNS logs, proxy logs, and other common IR data sources can identify communication to a previously unknown domain, and DNS categorization and similar tools can facilitate this basic hunting.

2 – Trusted Cloud Service

A more advanced approach is to leverage trusted cloud services for C2, such as Facebook, Dropbox, or Gmail. This is effective because most network security tools only identify the destination of the traffic; since HTTPS is in use and deeper packet analysis is impossible without SSL decryption at the web proxy, the traffic is allowed. We generally find this approach successful, but a cloud service accepted in one organization may be cause for investigation in another.

One way to leverage these trusted providers is Domain Fronting, a technique in which attackers use trusted domains as redirectors to the command-and-control server by specifying a particular Host header. While this technique is extremely valuable, a blue team that performs SSL decryption at the web proxy may notice anomalous traffic being exchanged at the packet level.
Sophisticated attackers work around this by creating communication channels that use the legitimate functionality of a trusted cloud service, surviving even deep packet inspection. One example of abusing a trusted cloud provider is using Google Drive as a communication channel. While we found this very effective in organizations running G Suite, that's far from universal, and it drew the attention of blue teams when used in the wrong environment. Simply put, if Box.net is the only approved cloud storage, requests to competing APIs are a data-leakage threat and should be investigated. To summarize, blue teams have a variety of techniques at their disposal to block and detect malware, including:

• Log, categorize, and block DNS (including new or previously uncategorized domains)

• Block specific cloud-hosting/cloud-service domains that are not used as part of the enterprise infrastructure (e.g. Dropbox, Gmail, Github, etc.)

• Monitor for and block connections to previously-unknown domains or IPs

• Proactively utilize threat intel feeds and aggregate the insight of multiple organizations to create a more comprehensive picture of current and likely threats

• Automate these activities for greater speed in shutting down a suspected attack

By utilizing AWS API services – particularly S3 buckets – as the C2 channel, we can be assured the domain will be live in all environments, subverting the prevention and detection techniques listed above.

Cobalt Strike and the External C2 Specification

For those unfamiliar, Cobalt Strike (CS) is a commercial malware platform used by red teams and threat actors alike. Essentially, CS has two components: Beacon and Team Server. The Beacon is malicious code that runs on a victim's machine and is responsible for setting up communications with the Team Server to receive and execute further commands. The Team Server is responsible for administering each Beacon that calls home for further commands from the operator. Recently, Raphael Mudge (@armitagehacker), the creator of Cobalt Strike, released a specification for abstracting these external communication channels and implementing them in your own operations. Doing so reduces red team infrastructure overhead by reducing to zero the number of malicious domains and servers required to redirect to your Team Server.

Abstracting the Specification Using Frameworks

The greater security community took the External C2 specification and ran with it, creating frameworks that wrap Raphael Mudge's work. The framework we highlight in this post is Jonathon Echavarria's (@Und3rf10w) external_c2_framework (https://github.com/Und3rf10w/external_c2_framework). We chose this framework for two reasons:

1. Modularity. The framework is designed so that adding modules is straightforward and requires little tinkering. The external specification is broken into an encoding mechanism and a transport mechanism, making new implementations easy.

2. Python. The entire framework is built in Python, a straightforward language that interfaces easily with a wide variety of external services.

The only functionality missing from Echavarria's framework was multi-client support: at the time of this article, the framework supported only one Beacon per Team Server. Our fork of the repository expands this capability and enables support for multiple Beacons on a single Team Server.

Implementation Using Amazon Web Services API

Und3rf10w's implementation is broken into encoding and transport mechanisms. The encoding module encodes the data in transit between the Beacon and the Team Server; this could be encryption, or something as simple as the base64 encoding this example uses. The transport module is how the Beacon and the Team Server communicate with each other over the external channel. The transport we use here is the Amazon Web Services (AWS) API, chosen for a few reasons:

1. The AWS API is implemented easily and succinctly in Python using the Boto library.

2. Developers and DevOps teams alike automate infrastructure and backups using AWS, meaning traffic from these addresses would blend in with regular workflow.

3. AWS supports a data-rich object type – S3 objects – which we'll use to uniquely identify agents and transmit data of arbitrary length and content.
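To illustrate the split between encoding and transport, a minimal base64 encoding module in the spirit of the framework's design might look like the following. The function names `encode` and `decode` are our illustration, not the framework's exact interface:

```python
import base64

# Minimal encoding-module sketch: because the framework separates
# encoding from transport, an encoder only needs a symmetric pair of
# functions. Names here are illustrative, not the framework's exact API.

def encode(data: bytes) -> bytes:
    """Encode raw Beacon/Team Server traffic before it is written to S3."""
    return base64.b64encode(data)

def decode(data: bytes) -> bytes:
    """Decode traffic read back from S3."""
    return base64.b64decode(data)
```

Swapping in real encryption would only require replacing these two functions; the transport module never inspects the payload it carries.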

Overview of AWS S3 Buckets as a Communication Channel

The Beacon and the Team Server communicate with each other through the AWS APIs by creating and deleting objects in S3. When the custom Beacon generated by the framework executes on the victim (client) machine, it registers itself in S3 by creating an object with a unique staging key. This key is comprised of two parts: a static prefix that notifies the Team Server the Beacon is ready to be staged and receive the full payload, and a unique identifier that the underlying External C2 server can use to identify commands that need to be sent to the Beacon.

The External C2 server is responsible for translating data between your external communication channel – in this case the AWS API – and the Team Server itself. It polls the S3 bucket for the staging key, extracts the agent's unique identifier, and forwards the request up to the Team Server to receive the full payload to execute on the client.

To notify the client that a new task is ready, the External C2 server creates a new object in S3. This object's key is comprised of the Beacon's unique ID from above and a static suffix, 'TaskForYou'. The object's contents hold the command for the Beacon to execute, encoded by the encoding module.

After initial registration, the Beacon periodically polls S3 for the task key corresponding to itself. It downloads and deletes the object, decodes the contents, and executes the command. Once executed, it encodes the command's output and creates a new object in S3 with the response key and the encoded results. The External C2 server polls until it receives a response from the agent, pulls and deletes the object, and sends the results up to the Team Server. Below is a diagram elaborating on the process:
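The key scheme above can be sketched in a few lines. To keep the sketch self-contained we stand in for the S3 bucket with a plain dict; in the real transport module each put, get, and delete would map to the corresponding Boto call against the bucket (`put_object`, `get_object`, `delete_object`). The staging prefix and response suffix are our assumptions – the only key component the post names explicitly is the 'TaskForYou' suffix:

```python
import base64
import uuid

# Stand-in for the S3 bucket: maps object key -> object body.
# A real transport would replace these dict operations with boto3
# put_object / get_object / delete_object calls.
bucket = {}

STAGE_PREFIX = "stage_"     # illustrative static staging prefix
TASK_SUFFIX = "TaskForYou"  # static suffix named in the post

def beacon_register():
    """Beacon side: create the staging object announcing readiness."""
    agent_id = uuid.uuid4().hex
    bucket[STAGE_PREFIX + agent_id] = b""
    return agent_id

def server_poll_for_agents():
    """External C2 server side: find newly staged agent IDs."""
    return [k[len(STAGE_PREFIX):] for k in list(bucket)
            if k.startswith(STAGE_PREFIX)]

def server_send_task(agent_id, command: bytes):
    """Server: publish an encoded task object for a specific Beacon."""
    bucket[agent_id + TASK_SUFFIX] = base64.b64encode(command)

def beacon_poll_task(agent_id):
    """Beacon: fetch-and-delete its task object, decode the command."""
    key = agent_id + TASK_SUFFIX
    if key not in bucket:
        return None
    body = bucket.pop(key)  # download then delete, as described above
    return base64.b64decode(body)

def beacon_respond(agent_id, output: bytes):
    """Beacon: write the encoded command output under a response key."""
    bucket[agent_id + "_response"] = base64.b64encode(output)

def server_collect_response(agent_id):
    """Server: pull-and-delete the response, decode, forward upstream."""
    key = agent_id + "_response"
    if key not in bucket:
        return None
    return base64.b64decode(bucket.pop(key))
```

A full round trip is then register → send task → poll task → respond → collect response, with both sides polling the bucket rather than each other.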

Shortfalls of AWS as a C2 Channel

When considering this method, it is evident that this is by no means a perfect transport module. One shortfall is that to push and pull from S3, the Beacon executable must carry your AWS API keys. In this proof of concept they are hard-coded; however, there are other ways to distribute them. One such method is to encrypt the AWS credentials with a transient key and publish the encrypted credentials and the transient key on separate remote resources. When the Beacon executes, it fetches both pieces, decrypts the credentials, and begins the staging process.

Further, while we tackled multi-client Beacon functionality, the framework still lacks task sequencing; without it, a client can only receive one command and submit one response at a time.

The other pitfall of this technique is latency. Both client and server communicate with the external resource via polling, which means the client will continually beacon out to the external resource looking for tasks. The Beacon's polling rate is predefined and immutable once the client executable is compiled. At the time of writing, it is unclear how to resolve this issue.
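The transient-key idea above could be sketched as follows. The XOR keystream derived with SHA-256 is purely an illustration of "encrypted with a transient key" (a production Beacon would use a real cipher such as AES), the credential string is a placeholder, and the network fetches of the two remote resources are elided:

```python
import hashlib
from itertools import count

# Illustration only: XOR against a SHA-256-derived keystream stands in
# for a real cipher; a production implementation would use AES or similar.

def keystream(key: bytes, length: int) -> bytes:
    """Expand the transient key into a keystream of the requested length."""
    out = b""
    for counter in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric: the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Operator side: encrypt the AWS credentials with a transient key, then
# publish the ciphertext and the key on two *separate* remote resources.
transient_key = b"one-time-transient-key"  # would be generated per build
credentials = b"PLACEHOLDER_KEY_ID:PLACEHOLDER_SECRET"  # not real keys
encrypted_blob = xor_crypt(credentials, transient_key)

# Beacon side: fetch both pieces (fetches elided here), decrypt, and
# begin the staging process with the recovered credentials.
recovered = xor_crypt(encrypted_blob, transient_key)
```

Splitting the ciphertext and key across separate resources means neither the binary nor any single remote endpoint exposes the working credentials on its own.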

Demonstrating the Attack: Proof of Concept

To tie everything together, below is a short video demo of how to implement the framework on your own Team Server. In it, we demonstrate spinning up the Team Server and External C2 server, executing the payload on the client, and finally – by invoking netstat -a – that the Beacons are not polling back to our Team Server directly. Note: the Team Server itself is not using any redirectors, so the netstat invocation here is good proof positive. For our fork of external_c2_framework, including the AWS S3 transport module, see our Github:

https://github.com/RhinoSecurityLabs/external_c2_framework


Conclusion: Improving Security Defenses