You saw us last with Nathan Anderson’s awesome MSFvenom tutorial. Moving on from there, I’m going back to network discovery tools that I find useful in my penetration tests. In this post, I’ll introduce masscan.
The Basics
Adam has blogged about using Nmap for discovery. Another tool for finding open ports is masscan. Masscan doesn’t give as much info as Nmap but can be great when scanning large networks or just looking for open ports, as it can go much faster. Masscan also comes with Kali, and you can read their write-up here.
I normally use masscan when given a large target list. Masscan does need to be run as root, so make sure to switch to the root user or use sudo. My normal use of masscan looks something like this:
masscan --top-ports 1000 -iL targets --rate 500
Basically, this command takes the top 1000 TCP ports and scans all the targets from the host file. The target file can be a list of networks on each line. If you’re scanning just one network, you can also omit the -iL and simply pass that network on the command line.
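For a single network, a roughly equivalent command (using my lab range here purely as an example) looks like this:
masscan --top-ports 1000 192.168.1.0/24 --rate 500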
Useful Tips
I will show a few examples using my local little network, so not too exciting, but a good look at how to use the tool.
By examining the saved config file, we can see the top 1000 ports it scans:
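If you want to check that list yourself without running a scan, masscan’s --echo flag should dump the effective configuration, including the expanded port list, to the screen:
masscan --top-ports 1000 -iL targets --rate 500 --echo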
There are times when I want to save the output of a masscan to a file for more processing. There are a few different formats that masscan provides: binary, XML, grepable, JSON, and list. The flags are -oB, -oX, -oG, -oJ, -oL respectively. When using the flag you just pass the filename and masscan will output in that format to the file. I normally use the list format (-oL).
If, for some reason, you need to stop an ongoing scan, you can press Control + C, and masscan will pause. It will then write a config file called paused.conf, wait for 10 seconds, and then close. You can then resume the scan with --resume paused.conf.
You can also modify the paused.conf file to change settings. For example, I changed the scan rate here and started the scan back up. Notice it’s now scanning at 250 packets per second instead of the original 500.
You can also write config files and have masscan run those by designating them with the -c argument, but I don’t normally do that.
You can specify which ports to scan with -p. If I’m not scanning just the top 1000 ports, then I normally scan all of them using -p 1-65535. But if you want to know where all the HTTPS services on port 443 or the Kerberos services on port 88 are, you can specify just those ports with -p.
Here’s an example where I’m looking for all services on ports 20-23, 443, and 80. I’m also saving these to a list:
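The command for that scan looks roughly like this (the /24 is just my lab network, and me.out is the list file referenced below):
masscan -p 20-23,80,443 192.168.1.0/24 --rate 500 -oL me.out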
Here you can see the output of the list:
I normally just end up grabbing the host and port with Awk. To get a list of host:port pairs, you can use the following:
awk -F" " '{print $4":"$3}' < me.out
Maybe we will take a look at Awk and Vi at some point in a future Cool Tools post, but this gives you an idea of how useful they are.
There will be extraneous lines in the beginning and end, but those are easily removed.
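Since those extraneous lines in the list format are comments, one way to handle everything in a single pass is to match only the result lines, which start with the word open:
awk '/^open/ {print $4":"$3}' me.out | sort -u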
There is also a way to exclude hosts and networks from a scan. Say I don’t want to scan 192.168.1.151. I can either use the --exclude flag, or, if I have a list, use the --excludefile flag.
Here we do the same scan as before, but this time I pass an exclude file with 192.168.1.151 in it, and we can see that IP doesn’t show up in the results.
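For reference, the command looked something like this, with exclude.txt (a filename I chose) containing just 192.168.1.151:
masscan -p 20-23,80,443 192.168.1.0/24 --rate 500 --excludefile exclude.txt -oL me.out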
You can scan UDP services as well with -pU. Here I’m scanning for just 53 as an example, but listing ports works just the same as with TCP.
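That UDP scan looks roughly like this:
masscan -pU:53 192.168.1.0/24 --rate 500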
Conclusion
Masscan can do much more than what I describe above, but this is a good primer. I hope you’ve enjoyed this look at masscan. Please check back for the next post in the Cool Tools series!
It’s been two years since we checked in about this, and the penetration testing industry has made a lot of changes in that time. Historically, customers could choose between vulnerability scanning and manual penetration testing. The primary challenge for the prospective customer was determining what a company was actually offering and if they were truly doing the same quality of work as other companies quoting services of similar names.
However, in the last couple of years other service variants have entered the market. Many companies are now offering Penetration Testing as a Service (PTaaS) and now AI penetration testing is gaining traction in the marketplace as well.
What does this mean for companies shopping for a penetration test? It means that, more than ever, you have to be able to sift through the sales talk and fancy service briefs to understand what the offering truly is and how it compares to the other quotes you are reviewing.
Let’s look at each of these offerings and what you are likely to see:
Vulnerability Scanning
This is the basic automated scan running across your environment looking for low-hanging fruit: out-of-date software, unpatched systems, common vulnerabilities that can be picked up by automated rules, default passwords, etc. While the results often include false positives or vulnerabilities that can’t actually be leveraged in a real attack, a vulnerability scan is still a good place to start.
While vulnerability scanners are useful tools, they are not without limits. There are many exposures that they will not detect.
When incorporating vulnerability scanners as part of your security strategy, Raxis recommends performing authenticated scanning and assigning team members to regularly review the scans and correct discovered issues.
For customers who don’t require a penetration test for compliance or who want to add budget-friendly year-round scanning to their annual penetration test, Raxis Protect offers frequent scanning of your environment using best of breed scanning technologies. Raxis Protect is not a penetration test and does not include manual testing, but it does provide access to chat with the Raxis penetration testing team about reported findings so your team can understand and remediate them for a more secure environment.
Penetration Testing
Arguably the gold standard for determining true business risk, penetration testing uses skilled engineers to emulate the attacks of a bad actor. While penetration tests may include a vulnerability scan to discover possible entry points and reportable issues, the penetration tester will spend a majority of the time identifying gaps in the environment and looking for creative ways to exploit those vulnerabilities and to gain further access within the network or application.
The ultimate goal is to pivot to other systems, create chained attacks that escalate privilege, and ultimately gain persistent access to the environment while accessing sensitive information.
This attack simulation provides businesses with an understanding of what real-world attackers could accomplish inside their environments, allowing them to correct issues and shore up defenses. However, it is a point in time assessment that is usually performed annually or quarterly, depending on the company’s needs and budget, changes in the environment, and appetite for accepting risk.
Raxis Strike is our new name for these traditional penetration tests, and we still find many customers requesting these on a regular basis.
Penetration Testing as a Service (PTaaS)
PTaaS has evolved in our industry as a method to address the limitation of penetration testing as a point-in-time assessment. However, there is still no standard definition for PTaaS from one company to another.
Some companies consider PTaaS an “on-demand” service that kicks off a penetration test upon request. Others provide a certain number of penetration tests during a year. Still others dress up an ongoing vulnerability scan – or a grouping of automated scans and exploits – and call it PTaaS.
Finding a middle ground that allows ongoing testing without breaking the bank is the trick.
Raxis Attack is PTaaS designed to be scoped to fit your needs and budget. Attack uses best-in-class scanning for continual scanning complimented with unlimited on-demand penetration testing requests for either individual findings or for the entire environment. And of course, even though you receive results throughout the year, we can still create a PDF report of your current findings to fulfill penetration testing compliance mandates.
We believe PTaaS is about collaboration – so Attack customers also have access to ongoing chat and video conferencing with the same Raxis engineers who perform our traditional penetration tests. We view it as a “fractional penetration testing team” at your service all year long.
Ultimately, you should take a deep dive with each company to truly understand what they are offering and which level of service is the best fit for your organization.
AI Penetration Testing
As in most industries, AI is making its dent in offensive security as well. AI is often hailed as the silver bullet for all security woes. However, in reality, AI is still limited in what it can do and is very likely not the only tool you need.
Some companies claim their AI services can do everything manual testing can do and more. In our experience, these claims are often overstated. We have yet to see an AI solution that can truly think outside the box and manipulate systems the same way a seasoned engineer is able to.
We’ve also seen cases where common tools are scripted to run automatically in an environment using basic known attacks, and that offering is incorrectly branded as an AI-based solution. The concern is that many of those tools, if left unattended, can take down systems and cause other stability concerns.
If you are considering an AI solution, look closely at the solution and make sure it’s truly offering what you need.
So, What Does this Cost Me?
In the US, vulnerability scans are still the lowest cost of the four methods we discussed above, often running from a few hundred dollars to a few thousand depending on the size of your environment and whether the scanning is required for regulatory compliance. Vulnerability scanning should be an integral part of your security program, but it does not check the penetration test box for compliance, and it doesn’t give you a full picture of your security gaps.
Likewise, penetration testing pricing will differ depending on the size of the organization and the size and depth of your scope. Most reputable US penetration testing companies will start around $5,000 and go up to well into the six figures. Often these tests can be time-boxed to fit your objectives and budget.
PTaaS pricing still varies greatly. Factors such as the scope and frequency of testing, the level of manual testing vs automation, the depth of reporting and remediation guidance, and the inclusion of additional features like continuous monitoring all influence the total cost. As a general rule of thumb, the annual cost of a PTaaS subscription is often roughly the price of two traditional penetration tests, but this is only a rough estimate.
AI penetration testing tends to be far cheaper than traditional penetration testing and PTaaS, primarily because it’s an automated system with little to no human oversight. While it might technically check the box for compliance requirements, Raxis still believes AI testing is best used as a supplemental tool to other forms of adversarial attack simulation. From what we’ve seen, AI testing can start as low as $500 in some instances and increase from there.
In Summary
Whatever method you choose, to accurately assess pricing and select the best solution for your organization, start by discussing your specific requirements with multiple providers and requesting detailed proposals outlining their approaches, deliverables, and associated costs. This will allow you to compare the offerings and choose the one that strikes the right balance between coverage, quality, and value for your needs and budget.
Before starting any penetration test, you must know the scope of the work. Depending on the type of assessment, this could be specific hosts or full ranges of IP addresses with live hosts scattered throughout. If the scope is the latter, then it is a good idea to initially identify which hosts are live and then discover common, known vulnerabilities on the hosts. This will narrow down the attack surface and potential attack vectors to help establish a list of priority targets during the assessment.
There are several tools that I like to use when I start a new assessment. Some are free open-source tools, while others are commercial.
Open-Source Tools
Nmap
The first tool I always use is Nmap. Nmap is an open-source tool that can identify live hosts and services by conducting protocol enumeration across systems. It can also be used for specific configuration checks or even as a vulnerability scanner by using the Nmap Scripting Engine (NSE).
Even when I use Nmap with vulnerability scanners, I still regularly utilize Nmap scripts. Nmap comes pre-installed with Kali Linux but can easily be installed on any Linux or Windows system. Using Nmap for the first time can be intimidating, but after using it for a bit, users often find it very easy and reliable. I usually run the initial scans by ignoring ICMP checks (ping) and just assume that all hosts are live. I do this because some network admins like to disable ICMP on live hosts as a security measure (Good on ’em!).
nmap -v -A --open -Pn -iL targets.txt -oA nmap_scan
Note: -Pn disables the ICMP check.
If the scope is extremely large and the Nmap scans won’t complete in the time allowed, I enable the ICMP check by removing the “-Pn”:
nmap -v -A --open -iL targets.txt -oA nmap_scan
Below is a screen shot of a typical initial scan I perform on network assessments (internal & external):
The commands above are easy to learn once you use them a few times, and I’ll cover what is going on here.
First, I like to use the “-v” which enables verbose outputs.
The “-A” enables OS and version detection, as well as script scanning and traceroute output.
“-Pn” disables ICMP ping for host discovery, causing the scan to assume that all hosts are live.
Next, we have the IP range, in this instance a standard /24 internal network. If I have specific hosts or multiple ranges to target, I will create a text file and use the “-iL” switch to point to the text file.
Lastly, I like to output the results to all supported formats by setting “-oA.” The reason I do this is because I like to ensure I have the different file types for future use. For example, the .nmap output is easy to read when I want to scan through the output. I typically use the XML output for importing into Metasploit or when using EyeWitness to enumerate web hosts.
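As an example of that reuse, assuming EyeWitness is installed, it can read the Nmap XML output directly:
eyewitness --web -x nmap_scan.xml
And from within msfconsole, db_import nmap_scan.xml pulls the same results into the Metasploit database.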
There are quite a few good cheat sheets out there too if you let the Googles do the work for you. If not, follow this series for more Nmap tips and tricks in the future.
Masscan
Another tool similar to Nmap is Masscan. Masscan is a TCP port scanner that scans faster than Nmap. Unlike Nmap, Masscan requires you to define the ports you want to scan. I typically don’t use Masscan and just stick with Nmap, but it’s a good option if you want to quickly look for specific open ports on a network.
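A quick example along those lines (the ports and range here are just placeholders):
masscan -p 22,80,443,445,3389 10.0.0.0/24 --rate 1000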
OpenVAS
Another tool I use on occasion is OpenVAS (a.k.a. Greenbone), a vulnerability scanner. Greenbone offers a commercial version, but there is an open-source version available as well. I’ve used this many times when I am unable to access my usual vulnerability scanners. One downside to the tool is that it can be difficult to install. I recently followed the instructions for installing with Kali and ran into quite a few issues with the install. I’ve also noticed that the initial setup can take some time due to the signatures that are downloaded. If purchasing a commercial vulnerability scanner is too expensive, then Greenbone is definitely worth looking into, despite its challenges.
Commercial Tools
Nessus
By far, my favorite vulnerability scanner is Tenable’s Nessus. There is a free version available, but it’s limited to 16 IP addresses. If you are doing a lot of vulnerability scanning or time-boxed penetration tests, then it might be worth looking into purchasing this.
The thing I like most about Nessus is how I can sort by vulnerabilities across all hosts in a given scan. I can also sort these vulnerabilities by risk rating, which helps me narrow down critical or specific vulnerabilities to exploit or signs that a host or service may be a high priority for a closer look.
When viewing Nessus results, never ignore the informational findings. They often provide clues that more may be going on with a host or service than you realize at first glance.
Nexpose
Another great vulnerability assessment tool is Nexpose, owned by Rapid7. Nexpose is similar to Nessus, as it detects similar vulnerabilities. There are some slight differences in the way the products display results.
Nexpose is built around “sites.” Each site has defined hosts or IP ranges under it. From there each host’s vulnerabilities will be listed. The easiest way I’ve found to list out all vulnerabilities is to create a report from the site I’m working in.
Besides greater extensibility, one major advantage with Nexpose is that it ties in with Rapid7’s vulnerability management product, InsightVM. If you’re looking for a full vulnerability management solution and not just a vulnerability scanner, Nexpose is a good option to check out.
There are many other tools that I use, but these are always my first go-to tools to start an assessment.
Follow the series!
Stay tuned for more posts in the Cool Tools series, where the Raxis penetration testing team will highlight some of their favorite tools and explain how to get started with them.
A few years ago, Mark Zuckerberg of Facebook and Tesla’s Elon Musk feuded very publicly over whether artificial intelligence (AI) would be the key to unlocking our true potential as humans (Zuck) or spell doom for our species and perhaps the Earth itself (Musk). Apparently, neither of them accepts the notion that AI, like all technology, will expose us to new concerns even as it improves our lives.
Meanwhile, here in reality, business owners are still facing the less dramatic, but more urgent threat posed by all-too-human hackers. Most seek money, some are in it for fame, others cause havoc, and many want all of the above. It’s against this mob that Raxis is using AI, more accurately called machine learning, to give honest companies an upper hand in the fight.
Here’s how it works:
Human Talent Sets Us Apart
Raxis has built an incredible team of elite, ethical hackers, who are all the more effective because of their diverse backgrounds and skillsets. Our process starts with a traditional penetration test. Based on our customers’ parameters, we set our team to work testing your defenses.
Think of this like your annual physical at the doctor’s office (without the embarrassing paper gown). Our goal at this stage is to find ways into your network and determine where we can go from there. Once you know where you’re vulnerable, you can remediate and feel more comfortable that your defenses are solid.
AI Extends Human Capabilities
One major challenge of staying cybersecure is that new threats are emerging and new vulnerabilities are being discovered even as I write this sentence. The point of continuous penetration testing is that we leverage technology to account for the pace of this change. Our smart systems will continually probe your defenses looking for new weaknesses – with software that is updated in real time.
In keeping with my earlier example, this part of the process is analogous to wearing a heart monitor and having routine blood work. As long as everything remains normal, we let the system do its job.
Humans Enhance AI Effectiveness
When our AI discovers an anomaly, it alerts our team members, who quickly determine if it’s a false positive or a genuine threat. If it’s the former, we’ll simply note it in your Raxis One customer portal, so you don’t waste time chasing it down. The latter, however, will trigger an effort by our team to exploit the vulnerability, pivot, and see how far we can go.
Consider this exploratory surgery. We know there’s a problem, and we want to understand the extent so that you can fix it as quickly as possible. That’s why we give you a complete report of the vulnerability, any redacted data that we were able to exfiltrate, and storyboards to show you how we did it.
More than a Vulnerability Scan
If you’re familiar with vulnerability scanning, you’ll immediately recognize why Raxis Continuous Penetration Testing is different . . . and better. Ours is not a one-and-done test, nor is it a set-it-and-forget-it process. Instead, you have the advantage of skilled penetration testers, aided by technology, diligently monitoring your network.
AI isn’t ready to change the trajectory of the human race just yet, but it is improving our ability to protect the critical computer networks we rely on.
If you’d like to learn more, just get in touch, and we’ll be happy to discuss putting this new service to work for your company.
“Even after 10 years as CEO of Raxis, I’m still amazed at the wildly different pricing attached to a vast and loose array of services broadly labeled as penetration tests. I know something is amiss when I see quotes range from $1,500 to $18,000 per week (and more) for what is ostensibly the same work.”
Mark Puckett, Raxis CEO
I understand and appreciate the confusion among companies large and small who need this essential service, but who have no good way of knowing whether they’re getting a bargain or being fleeced. That’s why, once again, I’m stepping in with a straightforward guide that I hope will be helpful to any business that needs to test its cyber defenses against professional, ethical hackers.
Rules of Thumb
As with other professional services, the adage that you get what you pay for does hold true in penetration testing. A quote that’s a lot lower than reputable competitors is cause for skepticism, as is one that is substantially higher. Fortune 500 companies may well need multi-month testing that can cost $250,000 and more, but it’s unlikely smaller firms do.
There are some external factors that can affect the price of a penetration test – whether it’s conducted onsite or remotely, or whether it’s a remedial test. For the purposes of this guide, we’ll stick with just the testing itself.
The scope is everything. Be certain that what you discuss is exactly what you are getting for the price. Consider carefully what you’re trying to achieve. Are you serious about preventing hackers from hijacking your network, stealing your data, or holding it for ransom? That’s a different objective than simply checking a box to comply with laws or industry regulations.
The Differentiators
When you’re considering a company to perform a penetration test, there are three major factors you should take into account alongside cost. They are the skillsets of the pentesters who will actually be doing the work, the time they propose to spend doing it, and the methodology they will employ in the process. Reputable firms like Raxis will spend the time necessary upfront to make certain you know what to expect before you sign a contract.
Skillsets
Not all pentesters are the same, and the pricing should reflect that. For example, some testers are phenomenal at hacking Windows systems, but are not nearly as strong when it comes to Linux or mobile technology. Many environments have a mix of Linux, Windows, Android, iOS, Mac OS X, Cisco IOS, wireless networks, and others. Make certain that the team you select is well-versed in all the technologies you have in scope so they can perform a valid test.
To reiterate, you should know exactly who will be doing the testing. Just because a company is based in the US does not mean that its testers are. Some firms cut costs by using un-credentialed, offshore teams to do their work. That may or may not be a cause for concern if it’s disclosed upfront. If it’s not, however, I would consider that a big red flag.
Time Dedicated to the Job
The amount of time penetration testers spend on jobs can really vary, and that will influence the price. Some pentesters believe that a week, two weeks, or even months are required to get a comprehensive test completed on your network. There are no hard and fast rules here, but the time should be proportional to the challenge.
If you’re quoted anything less than a week, I would hope that it’s an extremely small scope of just a handful of IPs with a few services running on them. Otherwise, I’d be skeptical. The key here is to make sure the time spent on the job makes sense with what is in your scope. Also, remember to factor in complexity. For example, a single IP with a large customer-facing web portal with 10 user roles will take a lot more time than 250 IP addresses that only respond to ping.
Interestingly, I’ve heard that some of our competition is taking a week or more to quote the job. I don’t understand the reasoning behind delays during the sales process, but I certainly recommend taking that into account when you make your decision.
Methodology
There are a few different ways to conduct penetration tests. I’ve broken them down into three types for reference.
Type A – The “Breach and Reach”: This type of test starts with the low hanging fruit and gains access as quickly as possible, just as a malicious hacker would. The goal is then to pivot, gain additional access to other systems, ensure retention of the foothold, and finally exfiltrate data. This is a true penetration test and demonstrates exactly what would happen in the event of a real-world breach. Some companies call this a “deep” penetration test as it gains access to internal systems and data. It’s the type of test that we prefer to do and what I would recommend as this is what the real adversarial hackers are doing.
Type B – The “360 Lock Check”: This test searches for every possible entry point and validates that it actually is exploitable. This validation is most often completed by performing the exploit and gaining additional privileges. The focus with this type is to find as many entry points as possible to ensure they are remediated. The underlying system might be compromised, but the goal is not to pivot, breach additional systems, or to exfiltrate data. This type is often useful for regulatory requirements as it provides better assurance that all known external security vulnerabilities are uncovered.
Type C – The “Snapshot”: This is more of a vulnerability scan in which the results from the scanner report are validated and re-delivered within a penetration testing report. Many lower-cost firms are calling this a penetration test, but that is not what it is. However, it may be all you need. If so, Raxis offers a vulnerability scan on an automatic, recurring basis, and it is very useful in discovering new security risks that are caused by changes to the environment or detecting emerging threats. (And no, it’s not a pen test.)
If you’re serious about security, I strongly recommend testing that ensures both Type A and Type B are part of the scope. That means testing even a small range of IPs can often take a week. However, it is the most comprehensive way to pentest your environment and meet regulatory requirements as well. Type A tests for the ability to pivot and exfiltrate information. Type B will get you that comprehensive test of any vulnerabilities found to ensure that you’re fixing real issues and not false positives.
In an upcoming post we’ll tell you about a new capability offered through Raxis One. This is pentesting that leverages the power of AI to extend and enhance the capabilities of our talented team of elite ethical hackers. Raxis Pentest AI is considered a hybrid of all three types of testing mentioned above in that it combines continual monitoring and random testing to find and exploit vulnerabilities as soon as they appear.
The most important advice I can offer anyone shopping for penetration testing services is to ask a lot of questions. High-quality, reputable companies like Raxis will be happy to put experts on the phone with you to provide answers. You can draw your own conclusions about any who won’t.
Ever wonder if hackers sit around talking about phishing expeditions and the “big one” that got away? The big one, of course, being a huge cache of sensitive data.
According to research from Proofpoint, those conversations probably don’t happen nearly as often as they should. That’s because 75% of organizations around the world experienced a phishing attack in 2020, and nearly 75% of attacks aimed at US businesses were successful. Sadly, not many actually get away.
What makes this stat even more concerning is the same report found that 95% of organizations claim to deliver phishing awareness training to their employees. That tells me the training isn’t being validated with the type of rigorous testing it takes to make sure it’s working.
To make sure we’re all clear on terminology, phishing is the practice of sending emails pretending to be from reputable companies or people in order to entice an individual to reveal information such as passwords or sensitive data. Verizon’s 2020 Data Breach Investigations Report found that phishing was the second-leading threat action in security incidents and the top activity that led to data breaches.
As a lead penetration tester at Raxis, I work with our clients to figure out what type of test they need, and then I customize a phishing attack designed to trick their employees and even their spam and virus filters, depending on the scope. Just like the blackhats, I’ll use any trick I can to get employees to give me their credentials or click on a malicious link. Unlike my unethical counterparts, however, all my phishing is catch-and-release.
In today’s video, I share some of my favorite tips and tricks for phishing assessments. The reason I’m happy to show you how is because, the more realistic the testing, the better prepared companies are when the bad guys come calling.
Phishing attacks are a significant threat to all organizations, no matter the size. It is important that members in your organization are up to date on security training, know how to spot the most common phishing scams, and understand the safeguards in place to help protect them and the company.
Raxis offers a variety of cybersecurity services, such as penetration testing, red team assessments, and other ethical hacking solutions, to help companies take a proactive approach to improve their security posture.
I’m excited that NTEN, the Nonprofit Technology Network with more than 50,000 community members, invited me to be a guest blogger this week.
It’s important that nonprofits avoid the mindset that leaves so many businesses vulnerable. Specifically, I’m talking about the idea that they are too small or have too little money to be of interest to scammers, hackers, and other cyber criminals. The truth is that the bad guys often don’t discriminate, and you may have something they want more than money.
For example, if you keep detailed donor records, that personally identifiable information (PII) might be devastating in the wrong hands. A skilled hacker or social engineer can do a lot with names, email addresses, and phone numbers alone. Add in Social Security numbers or bank account information, and you may be sitting on a gold mine for a malicious actor.
Some of today’s most serious threats are driven by political goals more than financial interests. In many cases, these are well-financed state-sponsored attacks, and your organization may have data that can help them breach a government agency’s security or social engineer their way into a large corporation.
And forget about hackers leaving you alone because of your mission. After my years in the cybersecurity sector, I’m no longer shocked at how low some of these black hats will go. We’ve seen hospitals and health care organizations, charities, churches, schools, and other do-good groups fall prey.
The saddest part is knowing that some of these attacks were successful for the very reasons the organizations thought they would never happen. To a hacker, the idea that you’re too small to notice may mean they see you as an easy target, even if you’re only one step toward their larger goal.
If you’re a nonprofit leader, board member, or even a volunteer, please take a moment to check out the article above. You may find some nuggets that will help you help your organization avoid a breach. And that may be the most important contribution you can make to your favorite cause.
Good morning, high-powered CSO, this is Brian with Raxis. I sent over a draft of the most recent assessment report as you requested. Just to recap, there were seven critical findings, five severe findings, and a menagerie of others that we can discuss at your convenience.
You’re the CSO of a major enterprise. You’ve hired us to perform a penetration test, and the results aren’t pretty. What now?
The team at Raxis brings a rich depth of experience in articulating risk to all audiences. We can talk technical with the engineering groups and discuss strategy with C-level executives. It’s how someone deals with adversity that defines them as a leader. We’ve seen leaders emerge from the fire to rise above the fray. We’ve also witnessed the fallout when leadership decisions are made in ignorance or for political expediency.
A security assessment is only one step in a process, and its value is largely determined by what happens after we’re out of the fray, so to speak. So, there you are; you have an unexpectedly thick and verbose penetration test report sitting in your inbox. Here are the 5 worst things you can do, based on what we’ve seen happen in the real world.
5. Sweep it Under the Rug
It may be tempting to just quietly file that report away because you think it might tarnish your reputation or because, “that only happens to other companies.” Maybe you would prefer to fix the findings with minimal political overhead. Here’s the problem. The report you received is a bona-fide disclosure of risk. When it landed in your inbox, all levels of plausible deniability left the building. If you do anything less than boldly embrace it, and the company is breached, you are going to be in a rough spot with tough questions to answer. By owning the problem, you can own the resolution. Let that be the focus.
4. Play the “Blame Game”
It is easy in the world of corporate culture to get bogged down in political maneuvering. A corporate leadership role requires a certain level of posturing, but there are few things less productive than finger pointing. The fact that you were against rolling out the vulnerable application or platform that was compromised may carry weight in your inner circle of colleagues, but your stockholders only want to know their investment has underlying value and that effective leadership is at the helm. It’s helpful to acknowledge mistakes and learn from them, but keep the emphasis on moving forward.
3. Cling to Penny Wise and Pound Foolish Remediations
The findings in the assessment report are not a checklist to tick off and call it a day. Yes, of course they should be fixed, but it’s critical to understand that a penetration test is opportunistic. The findings that are presented are probably not the only significant exposures. Look at the bigger story that they tell and formulate remediations at a systemic level. Were all the findings related to applications? If so, the problem probably is not the applications themselves but their development and deployment. Yes, fix the symptom, but do not neglect the underlying problems that led to it.
2. Rain Down Fire and Wrath
We see this far too often. A phishing email is sent out, and an employee clicks on the link, which then becomes the bridgehead for a compromise. When the report is delivered, specific individuals are identified as the source of the compromise and are promptly fired. That is absolutely the wrong course of action. Look at it this way. Once they understand the ramifications of their action, that person is the most secure person in the company at that moment. It’s likely that they will continue to operate under a heightened level of vigilance and will be the last person to click on a suspicious link in the future. Replacing them with someone who has not learned that lesson simply presses the reset button for future phishing attacks. Help them understand the attack and how their actions contributed, and they may become a powerful advocate among their peers for better security.
1. Silo Solutions
Would a capable attacker limit themselves to a single application, network, or technology? The answer, of course, is that they would not. Lateral movement is a huge component of privilege escalation. It’s important to scrutinize specific elements of any environment. We conduct assessments regularly against a single application or system, but what we always try to underscore is that rarely are attacks vertical. Rather, the attack chain tends to zigzag across technologies and business units within an organization. Just because you tested and remediated a specific web application does not mean that the app no longer presents a risk. It means that the direct exposure created by the app has been mitigated. Maybe there is another vulnerable application running on the same server that can be used as a point of compromise?
The point is that attackers do not silo their efforts, so don’t silo your defenses.
Your Decisions Make the Difference
Fortunately, these observations are more the exceptions than the rule, but they do happen. And they happen in surprisingly large and mature organizations. Most of these mistakes can be attributed to a knee-jerk moment of self-preservation. When our lizard brain steps in, sometimes we don’t make the best decisions for our career.
The best way to avoid these pitfalls is to never put yourself in that situation in the first place. Yes, some pentests are horrific. In leadership, it’s not how you fall. It’s how you rise above.
A security assessment is not a chance for someone to make you look bad. It’s a learning exercise. Embrace it and use it as a platform from which to build positive change.
The Picts were a tribal culture in northern Scotland that history has relegated to the realm of myth and enigmatic legend. Largely forgotten, the Picts fought off the military superiority of Rome’s army and built a sophisticated civilization on the whole before disappearing from history. These were a people dismissed by the advanced thinkers of the day as unimportant and trivial in their capabilities, only to rise up unexpectedly to great effect. What does this have to do with security? Nothing really, unless your data center is staffed by Roman centurions on horseback. If that’s you, then I am ripe with envy. But all levity aside, as it was with the Picts, so it is today with the humble and unassuming TCP Timestamp.
What is a TCP Timestamp?
If you’ve ever run a vulnerability scan, you’ve probably seen a low or informational severity finding associated with TCP Timestamp responses. The recommendation is always to disable them, but rarely is any background information provided. What are those little timestamps doing there and who really cares anyway? Like the Picts, much lies below the surface, and we dismiss them at our peril. Before we can get into the ramifications of this misunderstood protocol option, we must understand the mechanics behind TCP Timestamps, and what they actually are. The basis of TCP is that it is a stateful, reliable means of sending and receiving IP packets. In order for reliable communications to take place, there must be bidirectional communication between the sending and receiving nodes so that, in basic terms, the sender can know that the target system received the communication correctly, and the receiving node has confidence that the message it received was correct. To such ends, TCP communications are session-based, and the two nodes employ features in the protocol as a framework to manage the reliability of communications. This involves things like resets, syn-acks, re-transmissions, and the like, that you’ve probably seen in any number of network captures. TCP was designed to communicate reliably over any transmission medium at any speed; it provides the same level of communications integrity over dial up as it does on a LAN.
It is important to understand that TCP was originally designed to overcome the challenges of unreliable communication channels. Not much thought was given to excessively reliable and fast communications.
It seems counterintuitive, but, because TCP is synchronous and keeps track of packets, it can break down over high-bandwidth connections. It sounds crazy, I know, but let’s look at how this might happen.
Grab your pocket protector here, folks. We have to get a little nerdy.
If you asked President Trump about packet loss on TCP communications, he might respond, “It’s bad, very, very, very bad.” And he would be correct. But packet loss does occur for any number of reasons, and TCP maintains reliability by using selective acknowledgments to tell the sending node what TCP segments are queued on the receiving node and what segments it is still waiting for. These segments are, like anything else in network communications, numbered with finite sequence numbers. This value occupies a 32-bit space and exists within the confines of a Maximum Segment Lifetime (MSL), which is enforced at the IP level by something we’re more familiar with, the Time to Live (TTL). This MSL is usually adjusted based on the transfer rate so that faster speeds have smaller MSLs. This works pretty well until we introduce things like fiber optics. The bandwidth on a fiber connection can be so high that a TCP session can exhaust all of its sequence numbers and still have segments queued up in the same connection, leading to sequence number reuse, aka TCP Wrapping. This causes problems.
In short, as things get faster, it becomes more error prone to use timeout intervals to manage reliability.
The number of sequence numbers cannot exceed the 32-bit maximum of 4,294,967,295, so as transmission speed increases, the MSL values must get shorter to compensate. With enough bandwidth, they can shrink to the point that they are no longer able to provide message integrity. If only there was a way to identify whether packets were dropped based on actual timing rather than a sequence number, but how?
Behold the mighty TCP Timestamp!
TCP Timestamps are an important component of reliable high-speed communications because they keep TCP from stumbling over its own sequence numbers! Officially, this benefit is referred to adoringly as “PAWS” or Protection Against Wrapped Sequence Numbers. PAWS operates within the confines of a single TCP connection under the assumption that the TCP timestamp value increases predictably over time. If a segment is received with an older timestamp than one that was expected, it’s discarded. In doing so, PAWS protects against sequence numbers being reused in the same connection. It’s worth pointing out that there are a lot of weird exceptions and math around how that actually takes place, but for purposes of a blog post, we’ll steer clear of that rabbit hole. It should be clear at this point that TCP Timestamps serve a purpose in network communications, and that disabling them as a standard practice is a perilous endeavor.
Still awake? Good! Now we can talk about security!
Strictly speaking, TCP Timestamps are no more a security risk than the TCP protocol itself. Why then are they subject to all the bad press and mob calls for their disablement? The security concerns arise with the underlying mechanisms that are used to populate the values within the timestamp option itself. As the name suggests, the timestamp makes use of a virtual “timestamp clock” in the sender’s operating system. This clock must approximate real time measurements in order to remain compatible with other RTT measurements. There is no requirement for the timestamp clock to match the system clock, but it often maps to it for ease of design. After all, why make a new clock if you can just use the existing clock to derive values to be presented as those belonging to the new clock?
This leads to the one thing for which we hackers profess a deep and undying love: unintended and predictable behavior. Am I right?
By measuring multiple timestamp replies, we can determine the clock frequency of the target system. The clock frequency is how many “ticks” the timestamp clock increments per unit of real time. For example, if we measure 5 timestamp replies each 1 second apart, and each timestamp value increases by 100 with each reply, we can infer that the clock increments 100 ticks for every second of real time processed by the system clock. Since most (not all!!) clocks start at 0, we can compute the approximate uptime of the system. Using the above example, if the timestamp value is 60000, we know that each 100 ticks of that value equate to one second of real time. We can assume that 600 seconds have elapsed since the clock was started. In layman’s terms, the system was rebooted 10 minutes ago. We’re using small whole numbers here for purposes of illustration, but you get the idea.
In the real world, accurately fingerprinting the system will help with establishing timestamp validity, since clock specifications are documented.
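In practice, you don’t have to do the sampling and math by hand. For example, hping3 can request TCP timestamps and estimate the remote clock frequency and uptime for you (the target IP and port below are just placeholders), and Nmap’s OS detection (-O) makes a similar “uptime guess” using the same technique:
sudo hping3 --tcp-timestamp -S -p 443 -c 5 192.168.1.151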
System uptime by itself is an arbitrary value and doesn’t give away much on its own. But consider that patch cycles almost always include mandatory reboots. By surveying these values over time, one might be able to determine patching intervals by correlating reboot times across systems. What if you surveyed the same IP address multiple times and received different but consistently disparate values in the timestamp responses? That might allow you to identify systems that are behind a NAT or a load balancer, and may even allow you to draw conclusions about the load balancing configuration itself. Suppose you were testing a customer for susceptibility to DoS attacks. You may be able to use timestamps to determine with certainty whether the target system was knocked over, or whether you were just shunned by the IPS.
TCP Timestamps grant the hacker insight into a given system’s operational state, and how we use that information is limited only by our imagination.
But to dismiss their presence as a low-severity security finding just to be remediated is inappropriate, and it may do more harm than good. When to use TCP timestamps should be determined by operational requirements, not by blanket assumptions of their importance. Are you listening, vulnerability scanners? So next time you find yourself knee deep in informational findings from a vulnerability scan, put your ear close to a network cable. It’s possible you may hear the faint battle cry of a forgotten hero. Want to keep reading? Learn the difference between vulnerability scans and penetration tests, more about Vulnerability Management, and how a penetration test can help you understand which risks are most relevant to your network.
In Raxis penetration tests, we often discover IKE VPNs that allow Aggressive Mode handshakes, even though this vulnerability was identified more than 16 years ago in 2002. In this post we’ll look at why Aggressive Mode continues to be a vulnerability, how it can be exploited, and how network administrators can mitigate this risk to protect their networks and remediate this finding on their penetration tests.
What is an IKE VPN?
Before we get into the security details, here are a few definitions:
Virtual Private Network (VPN) is a network used to securely connect remote users to a private, internal network.
Internet Protocol Security (IPSec) is a standard protocol used for VPN security.
Security Association (SA) is a security policy between entities to define communication. This relationship between the entities is represented by a key.
Internet Key Exchange (IKE) is an automatic process that negotiates an agreed IPSec Security Association between a remote user and a VPN.
The IKE protocol ensures security for SA communication without the pre-configuration that would otherwise be required. This protocol is used by a majority of VPNs, including those manufactured by Cisco, Microsoft, Palo Alto, SonicWALL, WatchGuard, and Juniper. The IKE negotiation usually runs on UDP port 500 and can be detected by vulnerability scans. There are two versions of the IKE protocol:
IKEv2 was introduced in 2005 and can only be used with route-based VPNs.
IKEv1 was introduced in 1998 and continues to be used in situations where IKEv2 would not be feasible.
Pre-Shared Keys (PSK)
Many IKE VPNs use a pre-shared key (PSK) for authentication. The same PSK must be configured on every IPSec peer. The peers authenticate by computing and sending a keyed hash of data that includes the PSK. When the receiving peer (the VPN) is able to create the same hash independently using the PSK it has, confirming that the initiator (the client) has the same PSK, it authenticates the initiating peer.
While PSKs are easy to configure, every peer must have the same PSK, weakening security.
VPNs often offer other options that increase security but also increase the difficulty of client configuration.
RSA signatures are more secure because they use a Certificate Authority (CA) to generate a unique digital certificate. These certificates are used much like PSKs, but the peers’ RSA signatures are unique.
RSA encryption uses public and private keys on all peers so that each side of the transaction can deny the exchange if the encryption does not match.
In this post, we are discussing the first phase of IKEv1 transmissions. IKEv1 has two phases:
Establish a secure communications channel. This is initiated by the client, and the VPN responds to the method the client requested based on the methods its configuration allows.
Use the previously established channel to encrypt and transport data. All communication at this point is expected to be secure based on the authentication that occurred in the first phase. This phase is referred to as Quick Mode.
There are two methods of key exchange available for use in the first IKEv1 phase:
Main Mode uses a six-way handshake where parameters are exchanged in multiple rounds with encrypted authentication information.
Aggressive Mode uses a three-way handshake where the VPN sends the hashed PSK to the client in a single unencrypted message. This is the method usually used for remote access VPNs or in situations where both peers have dynamic external IP addresses.
The vulnerability we discuss in this article applies to weaknesses in Aggressive Mode. While Aggressive Mode is faster than Main Mode, it is less secure because it reveals the unencrypted authentication hash (the PSK). Aggressive Mode is used more often because Main Mode has the added complexity of requiring clients connecting to the VPN to have static IP addresses or to have certificates installed.
Exploiting Aggressive Mode
Raxis considers Aggressive Mode a moderate risk finding, as it would take a great deal of effort to exploit the vulnerability to the point of gaining internal network access. However, exploitation has been proven possible in published examples. The NIST listing for CVE-2002-1623 describes the vulnerability in detail.
A useful tool when testing for IKE Aggressive Mode vulnerabilities is ike-scan, a command-line tool developed by Roy Hills for discovering, fingerprinting, and testing IPSec VPN systems. When setting up an IKE VPN, ike-scan is a great tool to use to verify that everything is configured as expected. When Aggressive Mode is supported by the VPN, the tool can be used to obtain the PSK, often without a valid group name (ID), which can in turn be passed to a hash cracking tool.
If you use Kali Linux, ike-scan is included in the build. We can use the following command to download the PSK from an IKE VPN that allows Aggressive Mode:
ike-scan -A [IKE-IP-Address] --id=AnyID -PTestkey
Here is an example of the command successfully retrieving a PSK:
The tool also comes with psk-crack, which offers various options for cracking the discovered PSK. Because Aggressive Mode allows us to download the PSK, we can attempt to crack it offline for extended periods without alerting the VPN owner. Hashcat also provides options for cracking IKE PSKs. This is an example Hashcat command for cracking an IKE PSK that uses an MD5 hash:
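hashcat -m 5300 ike_md5.txt /usr/share/wordlists/rockyou.txt
Here 5300 is Hashcat’s IKE-PSK MD5 mode (5400 is the SHA1 variant); the hash file and wordlist names are placeholders, and the hash file needs the PSK parameters captured with ike-scan’s --pskcrack/-P option, converted to the format Hashcat expects. A psk-crack dictionary run against the same capture looks something like this:
psk-crack -d /usr/share/wordlists/rockyou.txt psk.txt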
Another useful tool is IKEForce, which is a tool created specifically for enumerating group names and conducting XAUTH brute-force attacks. IKEForce includes specific features for attacking IKE VPNs that are configured with added protections.
What VPN Administrators Can Do to Protect Themselves
As Aggressive Mode is an exploitable vulnerability, IKE VPNs that support Aggressive Mode will continue to appear as findings on penetration tests, and they continue to be a threat that can potentially be exploited by a determined attacker. We recommend that VPN administrators take one or more of the following actions to protect their networks. When documented, these actions should also satisfy any remediation burden associated with a prior penetration test or other security assessment.
Disable Aggressive Mode and only allow Main Mode when possible. Consider using certificates to authenticate clients that have dynamic IP addresses so that Main Mode can be used instead of Aggressive Mode.
Use a very complex, unique PSK, and change it on a regular basis. A strong PSK, like a strong password, can protect the VPN by thwarting attackers from cracking the PSK.
Change default or easily guessable group names (IDs) to complex group names that are not easily guessed. The more complex the group name, the more difficult it will be for an attacker to access the VPN.
Keep your VPN fully updated and follow vendor security recommendations. Ensuring software is up to date is one of the best ways to stay on top of vulnerability management.
At Raxis we perform several API penetration tests each year. Our lead developer, Adam Fernandez, has developed a tool to use for testing JSON-based REST APIs, and we’re sharing this tool on GitHub to help API developers test their own code during the SDLC process and to prepare for third-party API penetration tests. This code does not work on its own… it’s a base that API developers can customize specifically for their code. You can find the tool at https://github.com/RaxisInc/api-tool.
Here’s a basic overview of the tool from Adam himself:
The Raxis API tool is a simple Node.js class built for assessing API endpoints. The class is designed to be fully extensible and modifiable to support many different types of JSON-based REST APIs. It automatically handles token-based authentication, proxies requests, and exposes several functions designed to make it easier and faster to write a wrapper around an API and associated test code for the purposes of a penetration test. This tool is not designed to work on its own, but to serve as a building block and quickstart for code-based API penetration testing.
Ask any penetration tester at Raxis, and they’ll tell you that we see insecure transmission of private data, often in the form of usernames and passwords, on many of the web application and external penetration tests we perform. There are a number of easily discoverable issues in play that can assist an attacker in gaining access to private information and internal systems. These vulnerabilities are a part of A3-Sensitive Data Exposure in OWASP’s 2017 Top 10 list.
Encrypting Data in Transit
Say you’re on a public Wi-Fi network at a coffee shop or a hotel, or even a private network that’s been compromised in a previous attack. An attacker on the same network can sniff traffic as it’s sent across the network or even outside of the network to the Internet. This is called a Man in the Middle (MitM) attack. Users on the network have no way to know that an attacker is monitoring their network traffic in this way.
To mimic a MitM attack, I used Wireshark to capture the network traffic from my own computer. First, I logged in to a website that does not use encryption. This site uses HTTP to send the credentials in cleartext, which you can see listed in the captured POST request below:
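(If you’d rather do this from the command line, tshark, Wireshark’s CLI, can filter for HTTP POSTs and dump the form data; the interface name here is just an example.)
sudo tshark -i eth0 -Y "http.request.method == POST" -T fields -e http.host -e http.file_data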
Now, let’s see what happens when I submit my credentials over an encrypted HTTPS connection:
Even though I can sniff all network traffic between my computer and the website, I cannot read the data itself. Private data such as user credentials, session tokens, and credit card numbers should always be encrypted in transit using HTTPS to protect users from man-in-the-middle attacks.
Encoding is Not Encryption
We often run into websites that use Base64 or another form of encoding to protect data. It's important to note that while encoding can protect data from accidental manipulation or make it easier to send and store, it offers no more security than leaving the data in cleartext. The fundamental difference is that encoding is intentionally designed to be reversed without a key or secret, while encryption is designed to keep data inaccessible without such a key or secret.
In the example below, I used a free base64 decoding website. Feel free to use this website to see how easily you can decode the following base64-encoded value:
RHJpbmsgWW91ciBPdmFsdGluZQ==
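You don't even need a website; nearly every operating system ships with a base64 utility that decodes it in one line (the -d flag below is the GNU coreutils spelling; some macOS versions use -D instead):

echo 'RHJpbmsgWW91ciBPdmFsdGluZQ==' | base64 -d

No key, no secret, no effort, which is exactly why encoding is not a substitute for encryption.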
SSL Certificates
Now that you understand that sending private data in cleartext (or base64) is a bad idea and you've decided to use HTTPS for your site, you still need a valid SSL certificate before your site is secure. Without a valid SSL certificate, browsers will warn that your site is not secure, as my browser did for the Nessus site that I run on my computer.
While errors like this can be acceptable on a private computer where I know what the site is and where I'm running it, they are not acceptable for an external website or an internal corporate network. An attacker might present an invalid certificate, such as the one above, while trying to lure you to a malicious site. Without a valid certificate, you cannot be sure whether the site is genuine and safe.
As a comparison, here is a site that uses a valid SSL certificate. The browser checks the certificate and validates that it’s secure.
The browser also allows you to view the certificate yourself, should you want more detailed information about the issuer, subject, or public key. You can see all the details of why the browser considers the certificate secure (and, as shown in the example after this list, you can pull the same details from the command line):
It’s issued by a trusted Certificate Authority (CA), in this case a free CA called Let’s Encrypt
It hasn’t expired yet
It uses a strong encryption algorithm for the public key
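If you prefer the command line to clicking through browser dialogs, openssl can pull the same details. The hostname below is just the example site from this post; swap in whatever site you want to check:

openssl s_client -connect raxis.com:443 -servername raxis.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates

The output shows the issuer, subject, and validity dates, which covers the first two checks above; swap -issuer -subject -dates for -text to see the key and signature algorithms as well.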
This brings up a few common issues we see that make certificates insecure:
The certificate is expired. Many companies don’t have a master list of certificates and their expiration dates, which can lead to forgotten certificates that don’t get replaced until a penetration test or customer complaint highlights the issue.
The certificate is for the wrong website. Occasionally, companies may forward domain names to the server that uses the certificate, or change existing ones, without updating the certificate itself. Since the certificate wasn't issued for the additional or modified domains, it is not valid for those domains.
The certificate is self-signed. While it’s generally okay to self-sign certificates for fully internal sites provided the signing authority has been added to the trusted certificate store for all devices on the network, all external-facing sites should use an SSL certificate signed by a trusted Certificate Authority (CA).
The dangers with untrusted self-signed certificates are:
An attacker could create their own self-signed certificate that looks nearly identical to the legitimate one.
Many users will refuse to use the site because of the security errors browsers will show.
Users who are willing to use the site will become accustomed to using sites with certificate errors, making them more likely to ignore these errors on a malicious site.
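Cost is no longer a good excuse for self-signed certificates on external sites. Let's Encrypt, the free CA mentioned above, issues trusted certificates at no charge; a typical run of its certbot client on an nginx server looks roughly like this (the domain is a placeholder, and the command assumes certbot's nginx plugin is installed):

sudo certbot --nginx -d www.example.com

certbot obtains the certificate, updates the server configuration, and can renew the certificate automatically before it expires, which also helps with the forgotten-certificate problem described above.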
Encryption Ciphers and Protocols
Now that your webpages are being served over HTTPS with a valid certificate, let’s think about the strength of the encryption itself. Think of Julius Caesar using the now famous Caesar Cipher to encrypt secret military messages. Centuries later, it’s best not to use that cipher because it was cracked and is now common knowledge. If you’re interested in the history of encryption, Simon Singh’s The Code Book is a great read.
For the purposes of securing a website, we need to use complex encryption protocols and ciphers. Because attackers are always looking for flaws in these protections, we must always keep one step ahead and be sure to avoid using any weaker ciphers or protocols. SSLScan is a free tool that runs on Linux, Windows, and macOS. Here is an SSLScan of https://raxis.com/. SSLScan shows all protocols and ciphers that the site accepts; weak protocols and ciphers, if they were enabled for the site, would be listed in yellow or red. For a free online tool that can assess the ciphers and protocols on external-facing sites, check out Qualys' SSL Server Test.
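Running SSLScan is as simple as pointing it at a host and port (443 is the default, so specifying it is optional):

sslscan raxis.com:443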
Two types of protocols are commonly used: SSL (Secure Sockets Layer) and TLS (Transport Layer Security). All versions of SSL have been compromised and should not be used; there are multiple known exploits that attackers can use to gain access to your data or mount a denial-of-service attack if the data is transmitted using SSL. At the time of writing, the recommended protocol is TLSv1.2, although TLSv1.1 can be secure if configured correctly. TLSv1.3 is currently being drafted, so be sure to keep an eye out for that in the future. We always recommend removing weak protocols, such as SSLv2, SSLv3, and TLSv1.0, because SSL/TLS implementations support a downgrade process that an attacker can abuse to push a connection off a server's secure protocols and onto less secure ones.
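You can also spot-check a single legacy protocol with openssl; if the handshake below succeeds, the server still accepts TLSv1.0 (note that some modern OpenSSL builds are compiled without the older protocol flags):

openssl s_client -connect raxis.com:443 -tls1 </dev/null

Swapping -tls1 for -tls1_1 or -tls1_2 tests the other protocol versions the same way.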
Each protocol provides many cipher suites, some more secure than others. Weak ciphers must be explicitly disabled on the server to ensure that cryptographically strong ciphers are used when transmitting private data. In the example above, https://raxis.com/ only uses strong ciphers and protocols.
What Web Developers Can Do to Protect Their Sites
There are a number of things that web developers and administrators can do to keep their sites secure and their users safe.
Ensure that all private data, including credentials, session tokens, credit card numbers, medical data, and so on, is encrypted in transit by enforcing the use of HTTPS.
All resources on a page must be encrypted for the page to be considered secure. This means that graphics, scripts, and other behind-the-scenes files should be served over HTTPS as well.
Use only certificates issued by a trusted Certificate Authority (CA).
Replace certificates before they expire. Keep track of your company's certificates using a tool such as Venafi's TLS Protect, which can even replace expiring certificates automatically.
Use the TLSv1.2 protocol and disable older protocols where possible. If your site needs to support browser versions released prior to 2013, enable TLSv1.1, as these older versions do not support TLSv1.2. See Can I Use to check which browsers support TLSv1.2.
Disable weak ciphers within your protocols' cipher suites. Test thoroughly before making changes in production to ensure that your site doesn't have outages. OWASP's Transport Layer Protection Cheat Sheet is a great resource, and a sample server configuration illustrating these settings appears after this list.
Annual penetration tests can help ensure that your web applications and external network continue to provide strong encryption to your users. Raxis' Baseline Security Assessment can also help you keep a monthly watch for any issues.
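As a concrete reference for the HTTPS, protocol, and cipher items above, here is a minimal sketch of what the relevant settings might look like in an nginx server block. Treat it as an illustration rather than a drop-in configuration: the domain and certificate paths are placeholders, and the cipher list is just one example of the "strong ciphers only" idea, not an authoritative recommendation (OWASP's cheat sheet linked above is the better source for current cipher strings):

server {
    # Redirect all cleartext HTTP traffic to HTTPS
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    # Certificate and key paths are placeholders
    ssl_certificate     /etc/ssl/certs/www.example.com.crt;
    ssl_certificate_key /etc/ssl/private/www.example.com.key;

    # Offer only TLSv1.1 and TLSv1.2; SSLv2, SSLv3, and TLSv1.0 are never offered
    ssl_protocols TLSv1.1 TLSv1.2;

    # Prefer the server's cipher order and allow only strong ciphers (example list)
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
}

After any change like this, rerun SSLScan (or the Qualys SSL Server Test) to confirm that only the protocols and ciphers you intended are still accepted.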
What Users Can Do to Protect Themselves
While it’s up to website administrators to keep websites secure, users of these sites can keep themselves safe by being vigilant. Watch your browser. Never enter private data such as passwords, credit card numbers, and medical information if the website you’re using:
Starts with http://
Starts with https:// but shows a security error
Close your browser if you see a security error when you are already logged in. Your private data could be sniffed by an attacker every time you submit a form or click a link.
When on public Wi-Fi, such as free Wi-Fi offered in hotels or restaurants, use a VPN such as Private Internet Access to ensure your private information is encrypted in transit.