Securing the Forms Authentication Cookie with Secure Flag

March 25, 2024
Filed under: Development, Security 

One of the recommendations for securing cookies within a web application is to apply the Secure attribute. Typically, this is only a directive that tells the browser to include the cookie with a request only when the connection is over HTTPS. This helps prevent the cookie from being sent over an insecure connection, like HTTP. There is a special circumstance around the Microsoft .Net Forms Authentication cookie that I want to cover, as it can become a difficult challenge.
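
For context, a response cookie carrying the Secure attribute looks something like the following. This is purely illustrative; the cookie name and value are made up:

Set-Cookie: .ASPXAUTH=A1B2C3D4E5F6; path=/; secure; HttpOnly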

The recommended way to set the secure flag on the forms authentication cookie is to set the requireSSL attribute in the web.config as shown below:

<authentication mode="Forms">
 <forms name="Test.web" defaultUrl="~/Default" loginUrl="~/Login" path="/" requireSSL="true" protection="All" timeout="20" slidingExpiration="true"/>
</authentication>

In the above code, you can see that requireSSL is set to true.

In your login code, you would have code similar to this once the user is properly validated:

FormsAuthentication.SetAuthCookie(userName, false);

In this case, the application will automatically set the secure attribute.

Simple, right? In most cases this is very straightforward. Sometimes, it isn’t.

Removing HTTPS on Internal Networks

There are many occasions where the application may remove HTTPS on the internal network. This is commonly seen, or used to be commonly seen, when the application is behind a load balancer. The end user would connect using HTTPS over the internet to the load balancer and then the load balancer would connect to the application server over HTTP. One common reason for this was easier inspection of internal network traffic for monitoring by the organization.

Why Does This Matter? Isn’t the Secure Attribute of a Cookie Browser Only?

There was an interesting decision made by Microsoft when they created .Net Forms Authentication. If the requireSSL attribute is set on the authentication cookie (as specified above in the web.config example), the framework verifies that the connection to the application server is secure when the cookie is set using FormsAuthentication.SetAuthCookie(). This is the only cookie that the framework checks to verify it is being set over a secure connection.

This means that if you set requireSSL in the web.config for the forms authentication cookie and the request to the server is over HTTP, .Net will throw an exception.

This code can be seen in https://github.com/microsoft/referencesource/blob/master/System.Web/Security/FormsAuthenticationModule.cs in the ExtractTicketFromCookie method:
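
The check looks roughly like the following. This is a paraphrased sketch of the reference source, not a verbatim copy:

if (FormsAuthentication.RequireSSL && !context.Request.IsSecureConnection)
{
    // requireSSL is set but the request arrived over HTTP: throw.
    throw new HttpException("Connection not secure while creating secure cookie");
}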

The obvious solution is to enable HTTPS on the traffic all the way to the application server. But sometimes as developers, we don’t have that control.

What Can We Do?

There are two options I want to discuss. The first one actually doesn’t provide a complete solution, even though it seems like it would.

Option 1 – FormsAuthentication.GetAuthCookie

At first glance, FormsAuthentication.GetAuthCookie might seem like a good option. Instead of using SetAuthCookie, which we know will break the application in this case, one might try to call GetAuthCookie and just manage the cookie themselves. Here is what that might look like, once the user is validated.

var cookie = FormsAuthentication.GetAuthCookie(userName, false);
cookie.Secure = true;
HttpContext.Current.Response.Cookies.Add(cookie);

Rather than calling SetAuthCookie, here we get a new auth cookie, set the secure flag ourselves, and then add it to the cookies collection manually. This is similar to what the SetAuthCookie method does, with the exception that it is not setting the secure flag based on the web.config setting. That is the key difference here.

Keep in mind that GetAuthCookie returns a new auth ticket each time it is called. It does not return the ticket or cookie that was created if you previously called SetAuthCookie.

So why doesn’t this work completely?

Upon initial testing, this method appears to work. If you run the application and log in, you will see that the cookie does have the secure flag set, just like our initial example at the beginning of this article. The issue here comes with sliding expiration. If you remember, in our web.config we had slidingExpiration set to true (the default is true, so it was set in the config just as a visual aid).

<authentication mode="Forms">
 <forms name="Test.web" defaultUrl="~/Default" loginUrl="~/Login" path="/" requireSSL="true" protection="All" timeout="20" slidingExpiration="true"/>
</authentication>

Sliding expiration means that once the ticket has passed half of its lifetime, the next request that includes it will automatically renew it. This allows an active user to avoid re-authenticating after the timeout. This renewal occurs, again in https://github.com/microsoft/referencesource/blob/master/System.Web/Security/FormsAuthenticationModule.cs, in the OnAuthenticate method:
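
The renewal path looks roughly like the following. Again, this is a paraphrased sketch of the reference source, not a verbatim copy:

FormsAuthenticationTicket newTicket = FormsAuthentication.RenewTicketIfOld(ticket);
HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(newTicket));
cookie.HttpOnly = true;
cookie.Path = FormsAuthentication.FormsCookiePath;
cookie.Secure = FormsAuthentication.RequireSSL; // the Secure flag is reset from the config setting
context.Response.Cookies.Add(cookie);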

We can see in the above code that the secure flag is again being set by the FormsAuthentication.RequireSSL web.config setting. So even though we jumped through some hoops to set it manually during the initial authentication, every ticket renewal will reset it back to the config setting.

In our web.config we had the timeout set to 20 minutes, so renewal kicks in once the ticket is more than 10 minutes old. Requesting a page after, say, 12 minutes will result in the application setting a new forms authentication cookie without the Secure attribute.

This option would most likely get scanners and initial testers to believe the problem was remediated. However, after a potentially short period of time (depending on your timeout setting), the cookie becomes insecure again. I don’t recommend trying to set the secure flag in this manner.

Option 2 – Global.asax

Another option is to use the Global.asax Application_EndRequest event to set the cookie to secure. With this configuration, the web.config would not set requireSSL to true and the authentication would just use the FormsAuthentication.SetAuthCookie() method. Then, in the Global.asax file we could have something similar to the following code.

protected void Application_EndRequest(object sender, EventArgs e)
{
    // Loop through every cookie in the response and force the Secure flag.
    for (int i = 0; i < Response.Cookies.Count; i++)
    {
        Response.Cookies[i].Secure = true;
    }
}

The above code will loop through all of the cookies, setting the secure flag on each of them on every request. It is possible to go one step further and check the cookie name to see if it matches the forms authentication cookie before setting the secure flag, as sketched below. But hopefully your site runs on HTTPS, so there should be no harm in all cookies having the secure flag set.
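
A minimal sketch of that name check, using the framework’s FormsCookieName property, might look like:

if (string.Equals(Response.Cookies[i].Name, FormsAuthentication.FormsCookieName, StringComparison.OrdinalIgnoreCase))
{
    Response.Cookies[i].Secure = true;
}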

There has been debate on how well this method works based on when the EndRequest event fires and whether it fires for all requests or not. In my local testing, this method appears to work and the secure flag is set properly. However, that was local testing with minimal traffic. I cannot verify its effectiveness on a site with heavy load, load balancing, or other configurations. I recommend testing this out thoroughly before using it.

Conclusion

For something that should be so simple, this situation can be a headache for developers that don’t have any control over the connection to their application. In the end, the ideal situation would be to enable HTTPS all the way to the application server so the requireSSL flag can be set in the web.config. If that is not possible, then there are some workarounds that will hopefully work. Make sure you are doing the appropriate testing to verify the workaround works 100% of the time and not just once or twice. Otherwise, you may just be checking a box because the cookie appears secure in one test, when it is not secure in all cases.

Does ASP:Textbox TextMode Securely Enforce Input Validation?

December 11, 2023
Filed under: Development, Security 

When building .Net Webform applications, the ASP:Textbox has a TextMode property that you can set. For example, you could indicate that the text should be a number by setting the property below:

<asp:TextBox ID="txtNumber" runat="server" TextMode="Number" />

As you can see in the above example, we are specifically setting the TextMode attribute to Number. You can see a list of all the available modes at: https://learn.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.textboxmode?view=netframework-4.8.1.

But what does this actually mean? Is it limiting my input to just a number or can this be bypassed?

It is important to understand how this input validation works because we don’t want to make assumptions from a security perspective on what this field may contain. So many attacks start with the input and our first line of defense is input validation. Limiting a field to just a Number or a Date object can mitigate a lot of attacks. However, we need to be positive that the validation is enforced.

Let’s take a look at what this attribute is doing.

From a display standpoint, the simple code we entered above in our .ASPX page turns into the following in the browser:

<input name="txtNumber" type="number" id="txtNumber" />

We can see that the TextMode attribute is controlling the Type attribute in the actual response. By default, the ASP:Textbox would return a type of Text, but here it is set to Number.

The number type changes the textbox so that input is essentially limited to just numbers. The image below shows the new field.

[Image: the number input field rendered in the browser]

If you try to submit the value with values other than numbers, it will display a message that indicates only numbers are allowed.

[Image: browser validation message indicating only numbers are allowed]

** Note that the restrictions on character input in the textbox itself may differ depending on the browser. Edge will only allow numbers and the letter ‘e’ to be input at all; if you try to enter ‘test’ it will only show the ‘e’. Firefox, on the other hand, will allow the characters to be added to the textbox. Both browsers, however, will show the error when the submit button is clicked, indicating that the value can only be numbers.

This initial testing validates that there is some client-side validation going on. This is great for immediate user feedback, but not for security. We need to ensure that the validation is happening on the server as well. It is too easy to bypass client-side validation and we should never trust it. Always verify your validation at the server level.

To test the server-side validation, we will just intercept the request using a web proxy. I use Burp Suite from Portswigger. Once the proxy is configured, we can turn intercept on and wait for the form to be submitted. Remember, the client-side validation is enforcing numbers, so we need to enter a regular number in the textbox to submit the form. We will change this to something else in the intercept window. Here we can see the number passed up in the paused request.

[Image: intercepted request showing the numeric value]

Next, we will modify the number to a regular string.

[Image: intercepted request with the value changed to a string]

Once we turn intercept off, the request will get to the code-behind. Let’s see what the server now sees as the value of the textbox.

[Image: the server-side value of the textbox showing “someTextString”]

We can see that the value is “someTextString”, indicating that there is no validation happening on the server side. This means that while the TextMode will cause a change to how the textbox works on the client, it doesn’t have any effect on how it works on the server.

How do we add server side validation?

There are a few ways to do this, depending on how your team works. One way would be to try to parse the txtNumber.Text value into an int or some other numeric type. If successful, you know you have just a number; if it fails, the data is no good.
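
A minimal sketch of the parsing approach might look like the following; the handler name is illustrative, and txtNumber comes from the earlier markup:

protected void btnSubmit_Click(object sender, EventArgs e)
{
    int number;
    if (int.TryParse(txtNumber.Text, out number))
    {
        // The value parsed as an integer, so it is safe to treat as a number.
    }
    else
    {
        // Parsing failed; reject the input and return an error.
    }
}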

Another way would be to add a regular expression validator to the form. This could look like this:

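The original screenshot is not available; the following is an illustrative reconstruction, with invented control IDs and error message:

<asp:TextBox ID="txtNumber" runat="server" TextMode="Number" />
<asp:RegularExpressionValidator ID="revNumber" runat="server"
    ControlToValidate="txtNumber"
    ValidationExpression="^[0-9]*$"
    ErrorMessage="Only numbers are allowed." />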

Here we have the regular expression ‘^[0-9]*$’ configured to only allow the digits 0-9. The great thing about these validators is that they provide both client and server-side validation out of the box. There is just one caveat to that. We have to make sure to check the Page.IsValid property, otherwise the server-side check will not be enforced. This would look like this:

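Again, the original screenshot is missing; this is an illustrative sketch and the handler name is invented:

protected void btnSubmit_Click(object sender, EventArgs e)
{
    if (!Page.IsValid)
    {
        // Server-side validation failed; stop processing the request.
        return;
    }

    // Safe to use txtNumber.Text as a number here.
}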

With this new check, if the text matches the regular expression, then everything will work as expected. If it does not match, then an error message is returned indicating that it is invalid text.

Wrap Up

Understanding the framework and how different features work is critical in providing good security. It is easy to assume that because we set the type to number, the server will also enforce that. Unfortunately, that is not always the case. Here we clearly see that while client-side validation is being enforced, there is no matching enforcement on the server. This leaves our application open to multiple vulnerabilities, such as SQL injection, cross-site scripting, etc., depending on how that data is used.

Always make sure that your validation is happening at the server.

Disabling SpellCheck on Sensitive Fields

January 18, 2023
Filed under: Development, Security 

Do you know what happens when a browser performs spell checking on an input field?

Depending on the configuration of the browser, for example with the enhanced spell check feature of Chrome, it may be sending those values out to Google. This could potentially put sensitive data at risk so it may be a good idea to disable spell checking on those fields. Let’s see how we can do this.

Simple TextBox

   <input type="text" spellcheck="false">

Text Area

     <textarea spellcheck="false"></textarea>

You could also cover the entire form by setting it at the form level as shown below:

     <form spellcheck="false">

Conclusion

It is important to point out that password fields can also be vulnerable to this if they have the “show password” option. In these cases, it is recommended to disable spell checking on the password field as well as other sensitive fields. 

If the field might be sensitive and doesn’t benefit from spell checking, it is probably a good idea to disable this feature.

What is the difference between encryption and hashing?

August 29, 2022
Filed under: Development, Security 

Encryption is a reversible process, whereas hashing is one-way only. Data that has been encrypted can be decrypted back to the original value. Data that has been hashed cannot be transformed back to its original value.

Encryption is used to protect sensitive information like Social Security Numbers, credit card numbers or other sensitive information that may need to be accessed at some point.

Hashing is used to create data signatures or comparison-only features. For example, user passwords used for login should be hashed because the program doesn’t need to store the actual password. When the user attempts to log in, the system will generate a hash of the supplied password using the same technique that was used for the stored value and compare the two. If they match, the passwords are the same.
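
As a rough illustration (not from the original post), a password check using PBKDF2 on modern .Net might look like the following sketch; the iteration count and sizes are illustrative:

using System.Security.Cryptography;

public static bool VerifyPassword(string supplied, byte[] salt, byte[] storedHash)
{
    // Hash the supplied password with the same salt and parameters
    // that were used to create the stored hash.
    using (var kdf = new Rfc2898DeriveBytes(supplied, salt, 100000, HashAlgorithmName.SHA256))
    {
        byte[] suppliedHash = kdf.GetBytes(32);

        // Compare the two hashes in constant time; a match means the passwords match.
        return CryptographicOperations.FixedTimeEquals(suppliedHash, storedHash);
    }
}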

Another example scenario with hashing is with file downloads to verify integrity. The supplier of the file will create a hash of the file on the server so when you download the file you can then generate the hash locally and compare them to make sure the file is correct.

XmlSecureResolver: XXE in .Net

August 22, 2022
Filed under: Development, Security, Testing 

tl;dr

  • Microsoft .Net 4.5.2 and above protect against XXE by default.
  • It is possible to become vulnerable by explicitly setting a XmlUrlResolver on an XmlDocument.
  • A secure alternative is to use the XmlSecureResolver object which can limit allowed domains.
  • XmlSecureResolver appeared to work correctly in .Net 4.X, but did not appear to work in .Net 6.

I wrote about XXE in .Net a few years ago (https://www.jardinesoftware.net/2016/05/26/xxe-and-net/) and I recently started doing some more research into how it works with the later versions. At the time, the focus was just on versions prior to 4.5.2 and then those after it, and there wasn’t a lot after it back then. Now a few more versions have appeared, and I started looking at some examples.

I stumbled across the XmlSecureResolver class. It piqued my curiosity, so I started to figure out how it worked. I started with the following code snippet. Note that this was targeting .Net 6.0.

static void Load()
{
    string xml = "<?xml version='1.0' encoding='UTF-8' ?><!DOCTYPE foo [<!ENTITY xxe SYSTEM 'https://labs.developsec.com/test/xxe_test.php'>]><root><doc>&xxe;</doc><foo>Test</foo></root>";

    XmlDocument xmlDoc = new XmlDocument();
    xmlDoc.XmlResolver = new XmlSecureResolver(new XmlUrlResolver(), "https://www.jardinesoftware.com");
    
    xmlDoc.LoadXml(xml);
    Console.WriteLine(xmlDoc.InnerText);
    Console.ReadLine();
}

So what is the expectation?

  • At the top of the code, we are setting some very simple XML
    • The XML contains an Entity that is attempting to pull data from the labs.developsec.com domain.
    • The xxe_test.php file simply returns “test from the web..”
  • Next, we create an XmlDocument to parse the xml variable.
    • By default (after .Net 4.5.1), the XmlDocument does not parse DTDs (or external entities).
  • To allow this, we are going to set the XmlResolver. In this case, we are using the XmlSecureResolver, assuming it will help limit the entities that are allowed to be parsed.
    • I will set the XmlSecureResolver to a new XmlUrlResolver.
    • I will also set the permission set to limit entities to the jardinesoftware.com domain.
  • Finally, we load the xml and print out the result.

My assumption was that I should get an error because the URL passed into the XmlSecureResolver constructor does not match the URL that is referenced in the xml string’s entity definition.

https://www.jardinesoftware.com != https://labs.developsec.com

Can you guess what result I received?

If you guessed that it worked just fine and the entity was parsed, you guessed correctly. But why? The whole point of XmlSecureResolver is to be able to limit this to only entities from the allowed domain.

I went back and forth for a few hours trying different configurations. Accessing different files from different locations. It worked every time. What was going on?

I then decided to switch from my Mac over to my Windows system. I loaded up Visual Studio 2022 and pulled up my old project from the old blog post. The difference? This project was targeting the .Net 4.8 framework. I ran the exact same code in the above snippet and sure enough, I received an error. I did not have permission to access that domain.

Finally, success. But I wasn’t satisfied. Why was it not working on my Mac? So I added the .Net 6 SDK to my Windows system and changed the target. Sure enough, no exception was thrown and everything worked fine. The entity was parsed.

Why is this important?

The risk around parsing XML like this is a vulnerability called XML External Entities (XXE). The short story here is that it allows a malicious user to supply an XML document that references external files and, if parsed, could allow the attacker to read the content of those files. There is more to this vulnerability, but that is the simple overview.

Since .net 4.5.2, the XmlDocument object was protected from this vulnerability by default because it sets the XmlResolver to null. This blocks the parser from parsing any external entities. The concern here is that a user may have decided they needed to allow entities from a specific domain and decided to use the XmlSecureResolver to do it. While that seemed to work in the 4.X versions of .Net, it seems to not work in .Net 6. This can be an issue if you thought you were good and then upgraded and didn’t realize that the functionality changed.

Conclusion

If you are using the XmlSecureResolver within your .Net application, make sure that it is working as you expect. Like many things with Microsoft .Net, everything can change depending on the version you are running. In my test cases, .Net 4.X seemed to work properly with this object. However, .Net 6 didn’t seem to respect the object at all, allowing DTD parsing when it was unexpected.

I did not opt to load every version of .Net to see when this changed. It is just another example where we have to be conscious of the security choices we make. It could very well be that this is a bug in the platform or that they are moving away from using this as a way to allow specific domains. In either event, I recommend checking to see if you are using this and verifying that it is working as expected.

Input Validation for Security

August 22, 2022
Filed under: Development, Security 

Validating input is an important step for reducing risk to our applications. It might not eliminate the risk, and for that reason we should consider what exactly we are doing with input validation.

Should you be looking for every attack possible?

Should you create a list of every known malicious payload?

When you think about input validation are you focusing on things like Cross-site Scripting, SQL Injection, or XXE, just to name a few? How feasible is it to attempt to block all these different vulnerabilities with input validation? Are each of these even a concern for your application?

I think that input validation is important. I think it can help reduce many of these vulnerabilities. I also think that our expectations should be aligned with what input validation can actually do. We shouldn’t overlook these vulnerabilities, but instead recognize the appropriate limitations. All of these vulnerabilities have a counterpart, such as escaping, output encoding, or parser configurations, that will complete the appropriate mitigation.

If we can’t, or shouldn’t, block all vulnerabilities with input validation, what should we focus on?

Start with focusing on what is acceptable data. It might be counter-intuitive, but malicious data could still be acceptable data as far as the data requirements are concerned. We define our acceptable data with specific constraints. These typically fall under the following categories:

* Range – What are the bounds of the data? ex. Age can only be between 0 and 150.
* Type – What type of data is it? Integer, Datetime, String.
* Length – How many characters should be allowed?
* Format – Is there a specific format? ex. SSN or Account Number
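
As a rough sketch (the names and bounds here are illustrative, not from the original post), these constraint checks might look like:

public static bool IsValidAge(string input)
{
    // Type: must parse as an integer.
    if (!int.TryParse(input, out int age))
        return false;

    // Range: age can only be between 0 and 150.
    return age >= 0 && age <= 150;
}

public static bool IsValidStateAbbreviation(string input)
{
    // Length: exactly 2 characters. Format: letters only.
    return input != null
        && System.Text.RegularExpressions.Regex.IsMatch(input, "^[A-Za-z]{2}$");
}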

As noted, these are not specifically targeting any vulnerability. They are narrowing the capabilities. If you verify that a value is an Integer, it is hard to get typical injection exploits. This is similar to custom format requirements. Limiting the length also restricts malicious payloads. A state abbreviation field with a length of 2 is much more difficult to exploit.

The most difficult type is the string. Here you may have more complexity and depending on your purpose, might actually have more specific attacks you might look for. Maybe you allow some HTML or markup in the field. In that case, you may have more advanced input validation to remove malicious HTML or events.

There is nothing wrong with using libraries that help look for malicious attack payloads during your input validation. However, the point here is to not spend so much time focusing on blocking EVERYTHING when that is not necessary to get the product moving forward. Understand the limitations of that input validation and ensure that the complementing controls, like output encoding, are properly engaged where they need to be.

The final point I want to make on input validation is where it should happen. There are two options: on the client, or on the server. Client validation is used for immediate feedback to the user, but it should never be used for security.

It is too easy to bypass client-side validation routines, so all validation should also be checked on the server. The user doesn’t have the ability to bypass controls once the data is on the server. Be careful with how you try to validate things on the client directly.

Like anything we do with security, understand the context and reasoning behind the control. Don’t get so caught up in trying to block every single attack that you never release. There is a good chance something will get through your input validation. That is why it is important to have other controls in place at the point of impact. Input validation limits the amount of bad traffic that can get to the important functions, but the functions still may need to do additional processes to be truly secure.

XXE DoS and .Net

May 6, 2019
Filed under: Development, Security 

XML External Entity (XXE) vulnerabilities can be more than just a risk of remote code execution (RCE), information leakage, or server side request forgery (SSRF). A denial of service (DoS) attack is commonly overlooked. However, given a misconfigured XML parser, it may be possible for an attacker to cause a denial of service and tie up your application’s resources. This would limit the ability of a user to access the expected application when needed.

In most cases, the parser can be configured to just ignore any entities, by disabling DTD parsing. As a matter of fact, many of the common parsers do this by default. If the DTD is not processed, then even the denial of service risk should be removed.

For this post, I want to talk about if DTDs are parsed and focus specifically on the denial of service aspect. One of the properties that becomes important when working with .Net and XML is the MaxCharactersFromEntities property.

The purpose of this property is to limit how long the value of an entity can be. This is important because oftentimes in a DoS attempt, the use of expanding entities can cause a very large request with very few actual lines of data. The following is an example of what a DoS attack might look like in an entity.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [ <!ELEMENT foo ANY >
<!ENTITY dos 'dos' >
<!ENTITY dos1 '&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;' >
<!ENTITY dos2 '&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;' >
<!ENTITY dos3 '&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;' >
<!ENTITY dos4 '&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;' >
<!ENTITY dos5 '&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;' >
<!ENTITY dos6 '&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;' >]>

Notice in the above example we have multiple entities that each reference the previous one multiple times. This results in a very large string being created when dos6 is actually referenced in the XML code. This would probably not be large enough to actually cause a denial of service, but you can see how quickly this becomes a very large value.

To help protect the XML parser and the application, the MaxCharactersFromEntities property limits how large this expansion can get. Once the limit is reached, the parser throws a System.Xml.XmlException: ‘The input document has exceeded a limit set by MaxCharactersFromEntities’.

The Microsoft documentation states that the default value is 0, which means it is undefined and there is no limit in place. Through my testing, it appears that this is true for .Net Framework versions up to 4.5.1. In 4.5.2 and above, as well as .Net Core, the default value for this property is 10,000,000. This is most likely a small enough value to protect against denial of service with the XmlReader object.
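
If DTD processing is needed, a minimal sketch of setting this limit explicitly might look like the following; the file name and limit value are illustrative:

using System.Xml;

XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Parse;   // DTDs are parsed, so entities will expand
settings.MaxCharactersFromEntities = 1024;      // cap the total characters produced by entity expansion

using (XmlReader reader = XmlReader.Create("input.xml", settings))
{
    while (reader.Read()) { /* process nodes */ }
}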

Intro to npm-audit

June 27, 2018
Filed under: Development, Security, Testing 

Our applications rely more and more on external packages to enable quick deployment and ease of development. While these packages help reduce the code we have to write ourselves, they may still present risk to our application.

If you are building Nodejs applications, you are probably using npm to manage your packages. For those that don’t know, npm is the node package manager. It is a direct source to quickly include functionality within your application. For example, say you want to hash your user passwords using bcrypt. To do that, you would grab a bcrypt package from npm. The following is just one of the bcrypt packages available:

https://www.npmjs.com/package/bcrypt

Each package we use may also rely on other packages. This creates a fairly complex dependency graph of code used within your application that you had no part in writing.

Tracking vulnerable components

It can be fairly difficult to identify issues related to these packages, never mind their sub-packages. We can’t all run our own static analysis on each package we use, so identifying new vulnerabilities is not very easy. However, there are many tools that work to help identify known vulnerabilities in these packages.

When a vulnerability is publicly disclosed it receives an identifier (CVE). The vulnerability is tracked at https://cve.mitre.org/ and you can search these to identify what packages have known vulnerabilities. Manually searching all of your components doesn’t seem like the best approach.

Fortunately, npm actually has a module for doing just this. It is npm-audit. The package was included starting with npm 6.0. If you are using an earlier version of npm, you will not find it.

To use this module, you just need to be in your application directory (the same place you would run npm start) and run:

npm audit

On the surface, it is that simple. You can see the output of me running this on a small project I did below:

[Image: npm audit report output]

As you can see, it produces a report of any packages that may have known vulnerabilities. It also includes a few details about what that issue is.

To make this even better, some of the vulnerabilities found may actually be fixed automatically. If that is available, you can just run:

npm audit fix

The full details of the different parameters can be found on the npm-audit page at https://docs.npmjs.com/cli/audit.

If you are doing node development or looking to automate identifying these types of issues, npm-audit may be worth a look. The more we can automate the better. Having something simple like this to quickly identify issues is invaluable. Remember, just because a component may be flagged as having a vulnerability, it doesn’t mean you are using that code or that your app is guaranteed vulnerable. Take the effort to determine the risk level for your application and organization. Of course, we should strive to be on the latest versions to avoid vulnerabilities, but we know reality diverts from what we wish for.

Have you been using npm-audit? Let me know. I am interested in your stories of success or failure to learn how others implement these things.

XSS in Script Tag

June 27, 2018
Filed under: Development, Security, Testing 

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is to understand the context. Is it HTML, Attribute, JavaScript, etc.? This understanding helps us better understand the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting: if the link tag is not processed, why would the script tag be? In many situations, the understanding is that we need to break out of the (") delimiter to start writing our own JavaScript commands. For example, I could submit a payload of (test";alert(9);t = "). This type of payload would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (") character is properly encoded to prevent breaking out.

Going back to our previous example, we may have overlooked something very simple. It wasn’t that the script wasn’t executing because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was bad. Our issue was that we were attempting to open a <script> within a <script>. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then we are starting a new one. This removes the embedded nuance and when the page is loaded, the alert box will appear.

This technique works in many places where a user can control the text returned within the <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. By default, Content Security Policy may not be an immediate solution since this situation would indicate that inline scripts are allowed. However, limiting the use of inline scripts to ones with a registered nonce would help prevent this technique. This reference shows setting the nonce (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src).
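
As a quick illustration (the nonce value is just a placeholder), the response header and an allowed inline script might look like the following:

Content-Security-Policy: script-src 'nonce-rAnd0m123'

<script nonce="rAnd0m123">
	var x = "user data";
</script>

An injected script block that lacks the matching nonce would be refused by the browser.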

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encodings are not properly implemented then we are citing a violation of our standards.

JavaScript in an HREF or SRC Attribute

November 30, 2017
Filed under: Development, Security, Testing 

The anchor (<a>) HTML tag is commonly used to provide a clickable link for a user to navigate to another page. Did you know it is also possible to set the HREF attribute to execute JavaScript? A common technique is to use the onclick event of the anchor tag to execute a JavaScript method when the user clicks the link. However, to stop the browser from actually redirecting, the HREF can be set to javascript:void(0);. This cancels the HREF functionality and allows the JavaScript from the onclick to execute as expected.
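
The original example markup is missing here; the following is a representative version, with an illustrative function name:

<a href="javascript:void(0);" onclick="doSomething();">Click Me</a>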

In the above example, notice that the HREF is set with a value starting with “javascript:”. This identifier tells the browser to execute the code following that prefix. For those that are security savvy, you might be thinking about cross-site scripting when you hear about executing JavaScript within the browser. For those of you that are new to security, cross-site scripting refers to the ability for an attacker to execute unintended JavaScript in the context of your application (https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)).

I want to walk through a simple scenario of where this could be abused. In this scenario, the application will attempt to track the page the user came from to set up where the Cancel button will redirect to. Imagine you have a list page that allows you to view details of a specific item. When you click the item it takes you to that item page and passes a BackUrl in the query string. So the link may look like:

https://jardinesoftware.com/item.php?backUrl=/items.php

On the page, there is a hyperlink created that sets the HREF to the backUrl property, like below:

<a href="<?php echo $_GET["backUrl"];?>">Back</a>

When the page executes as expected you should get an output like this:

<a href="/items.php">Back</a>

There is a big problem though. The application is not performing any type of output encoding to protect against cross-site scripting. If we instead pass in backUrl="%20onclick="alert(10); we will get the following output:

<a href="" onclick="alert(10);">Back</a>

In the instance above, we have successfully inserted an onclick event by breaking out of the HREF attribute; the injected portion is the empty href followed by our own onclick attribute. When this link is clicked, it will prompt an alert box with the number 10.

To remedy this, we would typically use output encoding to block the escape from the HREF attribute. For example, if we escape the double quotes (" -> &quot;), then we cannot get out of the HREF attribute. We can do this (in PHP as an example) using htmlentities() like this:

<a href="<?php echo htmlentities($_GET["backUrl"],ENT_QUOTES);?>">Back</a>

When the value is rendered, the quotes will be escaped like the following:

<a href="&quot; onclick=&quot;alert(10);">Back</a>

Notice in this example that the HREF attribute now contains the entire input, rather than an onclick event being added. When the user clicks the link, the browser will try to navigate to a URL containing " onclick="alert(10); rather than execute the JavaScript.

But Wait… JavaScript

It looks like we have solved the XSS problem, but there is a piece still missing. Remember at the beginning of the post how we mentioned the HREF supports the javascript: prefix? That will allow us to bypass the current encodings we have performed. This is because, when using the javascript: prefix, we are not trying to break out of the HREF attribute. We don’t need to break out of the double quotes to create another attribute. This time we will set backUrl=javascript:alert(11); and we can see how it looks in the response:

<a href="javascript:alert(11);">Back</a>

When the user clicks on the link, the alert will trigger and display on the page. We have successfully bypassed the XSS protection initially put in place.

Mitigating the Issue

There are a few steps we can take to mitigate this issue. Each has its pros and cons, and many can be used in conjunction with each other. Pick the options that work best for your environment.

  • URL Encoding – Since the HREF is meant to be a URL, you could perform URL encoding. URL encoding will render the javascript benign in the above instances because the colon (:) will get encoded. You should be using URL encoding for URLs anyway, right?
  • Implement Content Security Policy (CSP) – CSP can help limit the ability for inline scripts to be executed. In this case, it is an inline script so something as simple as Content-Security-Policy: default-src 'self' could be sufficient. Of course, implementing CSP requires research and great care to get it right for your application.
  • Validate the URL – It is a good idea to validate that the URL used is well formed and pointing to a relative path (see the sketch after this list). If the system is unable to parse the URL then it should not be used and a default back URL can be substituted.
  • URL White Listing – Creating a white list of valid URLs for the back link can be effective at limiting what input is used by the end user. This can cut down on the values that are actually returned, blocking any malicious scripts.
  • Remove javascript: – This really isn’t recommended as different encodings can make it difficult to effectively remove the string. The other techniques listed above are much more effective.
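
As an example of the URL validation bullet above, a minimal sketch in .Net might look like this; the helper is hypothetical and not from the original post:

public static string GetSafeBackUrl(string backUrl, string defaultUrl = "/items.php")
{
    // Only accept a well-formed, site-relative URL; anything else gets the default.
    if (Uri.TryCreate(backUrl, UriKind.Relative, out Uri _)
        && backUrl.StartsWith("/")
        && !backUrl.StartsWith("//"))
    {
        return backUrl;
    }
    return defaultUrl;
}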

The above list is not exhaustive, but does give an idea of ways to help reduce the risk of JavaScript within the HREF attribute of a hyper link.

Iframe SRC

It is important to note that this situation also applies to the IFRAME SRC attribute. It is possible to set the SRC of an IFRAME using the javascript: notation. In doing so, the JavaScript executes when the page is loaded.

Wrap Up

When developing applications, make sure you take this use case into consideration if you are taking URLs from user supplied input and setting that in an anchor tag or IFrame SRC.

If you are responsible for testing applications, take note when you identify URLs in the parameters. Investigate where that data is used. If you see it is used in an anchor tag, look to see if it is possible to insert JavaScript in this manner.

For those performing static analysis or code review, look for areas where the HREF or SRC attributes are set with untrusted data and make sure proper encoding has been applied. This is less of a concern if the base path of the URL has been hard-coded and the untrusted input only makes up parameters of the URL. These should still be properly encoded.
