Does ASP:Textbox TextMode Securely Enforce Input Validation?

December 11, 2023 by · Comments Off on Does ASP:Textbox TextMode Securely Enforce Input Validation?
Filed under: Development, Security 

When building .Net Web Forms applications, the ASP:Textbox control has a TextMode property that you can set. For example, you could indicate that the text should be a number by setting the property as shown below:

<asp:TextBox ID="txtNumber" runat="server" TextMode="Number" />

As you can see in the above example, we are specifically setting the TextMode attribute to Number. You can see a list of all the available modes at: https://learn.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.textboxmode?view=netframework-4.8.1.

But what does this actually mean? Is it limiting my input to just a number or can this be bypassed?

It is important to understand how this input validation works because we don’t want to make assumptions from a security perspective on what this field may contain. So many attacks start with the input and our first line of defense is input validation. Limiting a field to just a Number or a Date object can mitigate a lot of attacks. However, we need to be positive that the validation is enforced.

Let’s take a look at what this attribute is doing.

From a display standpoint, the simple code we entered above in our .ASPX page turns into the following in the browser:

<input name="txtNumber" type="number" id="txtNumber" />

We can see that the TextMode attribute is controlling the Type attribute in the actual response. By default, the ASP:Textbox would return a type of Text, but here it is set to Number.

The number type changes the rendered textbox so that input is limited to, essentially, just numbers. The image below shows the new field.

Number box

If you try to submit the form with values other than numbers, it will display a message indicating that only numbers are allowed.

Number box error

** Note that the restrictions on character input in the textbox itself may differ depending on the browser. Edge only allows numbers and the letter 'e' to be entered at all; if you try to enter 'test' it will only show the 'e'. Firefox, on the other hand, allows the characters to be added to the textbox. Both browsers, however, will raise the error when the submit button is clicked, indicating the value can only be a number.

This initial testing confirms that there is some client-side validation going on. That is great for immediate user feedback, but not for security. We need to ensure that the validation is happening on the server as well. Client-side validation is too easy to bypass and we should never trust it. Always verify your validation at the server level.

To test the server-side validation, we will intercept the request using a web proxy. I use Burp Suite from PortSwigger. Once the proxy is configured, we turn intercept on and wait for the form to be submitted. Remember, the client-side validation is enforcing numbers, so we need to enter a regular number in the textbox to submit the form. We will then change the value to something else in the intercept window. Here we can see the number passed up in the paused request.

Intercept 1

Next, we will modify the number to a regular string.

Intercept 2

Once we turn intercept off, the request will reach the code-behind. Let's see what the server now sees as the value of the textbox.

Server text 1

We can see that the value is "someTextString", indicating that there is no validation happening on the server side. This means that while TextMode changes how the textbox works on the client, it has no effect on how it works on the server.

How do we add server-side validation?

There are a few ways to do this, depending on how your team works. One way would be to try to parse the txtNumber.Text value into an int or another numeric type. If the parse succeeds, you know you have just a number; if it fails, the data is no good.
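A rough sketch of the parsing approach (the control and label names are illustrative):

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        int number;

        // int.TryParse returns false instead of throwing on bad input.
        if (!int.TryParse(txtNumber.Text, out number))
        {
            lblError.Text = "Only numbers are allowed.";
            return;
        }

        // 'number' is now guaranteed to be an integer.
    }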

Another way would be to add a regular expression validator to the form. This could look like this:

Regex1
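Such a validator might look something like the following sketch (the validator ID and error message are illustrative):

<asp:TextBox ID="txtNumber" runat="server" TextMode="Number" />
<asp:RegularExpressionValidator ID="revNumber" runat="server"
    ControlToValidate="txtNumber"
    ValidationExpression="^[0-9]*$"
    ErrorMessage="Only numbers are allowed." />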

Here we have the regular expression '^[0-9]*$' configured to only allow the digits 0-9. The great thing about these validators is that they provide both client- and server-side validation out of the box. There is just one caveat: we have to make sure to check the Page.IsValid property, otherwise the server-side check will not be enforced. That check would look like this:

Regex 2
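As a sketch in the code-behind (again, control names are illustrative):

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        // Without this check, the validator only stops the postback on
        // the client; the server would still process the request.
        if (!Page.IsValid)
        {
            lblError.Text = "Invalid text.";
            return;
        }

        // At this point txtNumber.Text has matched ^[0-9]*$.
    }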

With this new check, if the text matches the regular expression, then everything will work as expected. If it does not match, then an error message is returned indicating that it is invalid text.

Wrap Up

Understanding the framework and how its different features work is critical to providing good security. It is easy to assume that because we set the type to number, the server will also enforce that. Unfortunately, that is not always the case. Here we clearly see that while client-side validation is being enforced, there is no matching enforcement on the server. This leaves our application open to multiple vulnerabilities, such as SQL injection or cross-site scripting, depending on how that data is used.

Always make sure that your validation is happening at the server.

Chrome is making some changes… Are you Ready?

February 10, 2020 by · Comments Off on Chrome is making some changes… Are you Ready?
Filed under: Development, Security 

Last year, Chrome announced that it was making a change to default cookies to SameSite=Lax when no SameSite attribute is explicitly set. I wrote about this change last year (https://www.jardinesoftware.net/2019/10/28/samesite-by-default-in-2020/). This change could have an impact on some sites, so it is important that you test for it. The changes are supposed to start rolling out in February (this month). The linked post shows how to force these defaults in both Firefox and Chrome.

In addition to this, Chrome has announced that it is going to start blocking mixed-content downloads (https://blog.chromium.org/2020/02/protecting-users-from-insecure.html). They are starting in Chrome 83 (June 2020) by blocking executable file downloads (.exe, .apk) that are served over HTTP but requested from an HTTPS site.

The issue at hand is that users are misled into thinking the download is secure because the requesting page indicates it is over HTTPS. There isn't a way for them to clearly see that the request itself is insecure. The linked Chrome blog describes a timeline for how they will gradually block all mixed-content types.

For many sites this might not be a huge concern, but this is a good time to check your sites to determine if you have any mixed content and to identify ways to mitigate it.

You can identify mixed content on your site by using the JavaScript console, found under the developer tools in your browser. It will show a warning when it identifies mixed content. There may also be scanners you can use that will crawl your site looking for mixed content.

To help mitigate this from a high level, you could implement CSP to upgrade insecure requests:

Content-Security-Policy: upgrade-insecure-requests

This can help by upgrading insecure requests, but it is not supported in all browsers. The following post goes into a lot of detail on mixed content and some ways to resolve it: https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content
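If you are running ASP.Net on IIS, one way to add this header is through web.config; a minimal sketch:

    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <!-- Ask the browser to upgrade HTTP subresource requests to HTTPS -->
          <add name="Content-Security-Policy" value="upgrade-insecure-requests" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>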

The increase in protections of the browsers can help reduce the overall threats, but always remember that it is the developer’s responsibility to implement the proper design and protections. Not all browsers are the same and you can’t rely on the browser to provide all the protections.

Intro to npm-audit

June 27, 2018 by · 1 Comment
Filed under: Development, Security, Testing 

Our applications rely more and more on external packages to enable quick deployment and ease of development. While these packages reduce the code we have to write ourselves, they may still present risk to our application.

If you are building Nodejs applications, you are probably using npm to manage your packages. For those that don’t know, npm is the node package manager. It is a direct source to quickly include functionality within your application. For example, say you want to hash your user passwords using bcrypt. To do that, you would grab a bcrypt package from npm. The following is just one of the bcrypt packages available:

https://www.npmjs.com/package/bcrypt

Each package we use may also rely on other packages. This creates a fairly complex dependency graph of code used within your application that you had no part in writing.

Tracking vulnerable components

It can be fairly difficult to identify issues related to these packages, never mind their sub-packages. We can't all run our own static analysis on each package we use, so identifying new vulnerabilities is not easy. However, there are many tools that work to help identify known vulnerabilities in these packages.

When a vulnerability is publicly disclosed it receives an identifier (CVE). Vulnerabilities are tracked at https://cve.mitre.org/ and you can search them to identify which packages have known vulnerabilities. Manually searching for all of your components, however, doesn't seem like the best approach.

Fortunately, npm has a command for doing just this: npm-audit. It was included starting with npm 6.0, so if you are using an earlier version of npm, you will not find it.

To use it, you just need to be in your application directory (the same place you would run npm start) and run:

npm audit

On the surface, it is that simple. You can see the output of running this on one of my small projects below:

Npm audit

As you can see, it produces a report of any packages that may have known vulnerabilities. It also includes a few details about what that issue is.

To make this even better, some of the vulnerabilities found may be fixable automatically. If a fix is available, you can just run:

npm audit fix

The full details of the different parameters can be found on the npm-audit page at https://docs.npmjs.com/cli/audit.

If you are doing Node development or looking to automate identifying these types of issues, npm-audit may be worth a look. The more we can automate, the better. Having something simple like this to quickly identify issues is invaluable. Remember, just because a component is flagged as having a vulnerability, it doesn't mean you are using that code or that your app is guaranteed to be vulnerable. Take the effort to determine the risk level for your application and organization. Of course, we should strive to be on the latest versions to avoid vulnerabilities, but we know reality diverges from what we wish for.

Have you been using npm-audit? Let me know. I am interested in your stories of success or failure to learn how others implement these things.

JavaScript in an HREF or SRC Attribute

November 30, 2017 by · Comments Off on JavaScript in an HREF or SRC Attribute
Filed under: Development, Security, Testing 

The anchor (<a>) HTML tag is commonly used to provide a clickable link for a user to navigate to another page. Did you know it is also possible to set the HREF attribute to execute JavaScript? A common technique is to use the onclick event of the anchor tag to execute a JavaScript method when the user clicks the link. To stop the browser from actually redirecting, the HREF can be set to javascript:void(0);. This cancels the HREF functionality and allows the JavaScript from the onclick to execute as expected.
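A link using this technique might look like the following (a representative example; the handler name is illustrative):

<a href="javascript:void(0);" onclick="doSomething();">Click Me</a>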

In the above example, notice that the HREF is set with a value starting with "javascript:". This identifier tells the browser to execute the code following that prefix. For those that are security savvy, you might be thinking about cross-site scripting when you hear about executing JavaScript within the browser. For those of you that are new to security, cross-site scripting refers to the ability of an attacker to execute unintended JavaScript in the context of your application (https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)).

I want to walk through a simple scenario of where this could be abused. In this scenario, the application will attempt to track the page the user came from to set up where the Cancel button will redirect to. Imagine you have a list page that allows you to view details of a specific item. When you click the item it takes you to that item page and passes a BackUrl in the query string. So the link may look like:

https://jardinesoftware.com/item.php?backUrl=/items.php

On the page, a hyperlink is created that sets the HREF to the backUrl parameter, like below:

<a href="<?php echo $_GET["backUrl"];?>">Back</a>

When the page executes as expected you should get an output like this:

<a href="/items.php">Back</a>

There is a big problem though. The application is not performing any type of output encoding to protect against cross-site scripting. If we instead pass in backUrl="%20onclick="alert(10); we will get the following output:

<a href="" onclick="alert(10);">Back</a>

In the instance above, we have successfully inserted an onclick event by breaking out of the HREF attribute. The onclick="alert(10);" portion is the malicious string we added. When this link is clicked it will prompt an alert box with the number 10.

To remedy this, we would typically use output encoding to block the escape from the HREF attribute. For example, if we escape the double quote (" becomes &quot;) then we cannot get out of the HREF attribute. We can do this (in PHP as an example) using htmlentities() like this:

<a href="<?php echo htmlentities($_GET["backUrl"],ENT_QUOTES);?>">Back</a>

When the value is rendered, the quotes will be escaped like the following:

<a href="&quot; onclick=&quot;alert(10);">Back</a>

Notice in this example that the entire input stays inside the HREF attribute, rather than an onclick event being added. When the user clicks the link it will try to go to https://www.developsec.com/" onclick="alert(10); rather than execute the JavaScript.

But Wait… JavaScript

It looks like we have solved the XSS problem, but a piece is still missing. Remember at the beginning of the post how we mentioned the HREF supports the javascript: prefix? That allows us to bypass the encoding we just performed, because with the javascript: prefix we are not trying to break out of the HREF attribute. We don't need to break out of the double quotes to create another attribute. This time we will set backUrl=javascript:alert(11); and we can see how it looks in the response:

<a href="javascript:alert(11);">Back</a>

When the user clicks on the link, the alert will trigger and display on the page. We have successfully bypassed the XSS protection initially put in place.

Mitigating the Issue

There are a few steps we can take to mitigate this issue. Each has its pros and cons, and many can be used in conjunction with each other. Pick the options that work best for your environment.

  • URL Encoding – Since the HREF is meant to be a URL, you could perform URL encoding. URL encoding renders the javascript: prefix benign in the above instances because the colon (:) gets encoded. You should be using URL encoding for URLs anyway, right?
  • Implement Content Security Policy (CSP) – CSP can help limit the ability for inline scripts to execute. In this case it is an inline script, so something as simple as Content-Security-Policy: default-src 'self' could be sufficient. Of course, implementing CSP requires research and great care to get it right for your application.
  • Validate the URL – It is a good idea to validate that the URL is well formed and points to a relative path. If the system is unable to parse the URL then it should not be used and a default back URL can be substituted (see the sketch after this list).
  • URL White Listing – Creating a white list of valid URLs for the back link can be effective at limiting what input is used by the end user. This cuts down on the values that are actually returned, blocking any malicious scripts.
  • Remove javascript: – This really isn't recommended, as different encodings can make it difficult to effectively remove the string. The other techniques listed above are much more effective.
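As a rough sketch of the URL validation idea in PHP (the fallback URL is illustrative):

    <?php
    $backUrl = isset($_GET["backUrl"]) ? $_GET["backUrl"] : "/items.php";

    // Reject anything that parses with a scheme or host (absolute or
    // protocol-relative) and require a single leading slash.
    $parts = parse_url($backUrl);
    if ($parts === false
        || isset($parts["scheme"]) || isset($parts["host"])
        || substr($backUrl, 0, 1) !== "/"
        || substr($backUrl, 0, 2) === "//") {
        $backUrl = "/items.php"; // safe default
    }

    echo '<a href="' . htmlentities($backUrl, ENT_QUOTES) . '">Back</a>';
    ?>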

The above list is not exhaustive, but does give an idea of ways to help reduce the risk of JavaScript within the HREF attribute of a hyper link.

Iframe SRC

It is important to note that this situation also applies to the IFRAME SRC attribute. It is possible to set the SRC of an IFRAME using the javascript: notation, in which case the JavaScript executes when the page is loaded.

Wrap Up

When developing applications, make sure you take this use case into consideration if you are taking URLs from user supplied input and setting that in an anchor tag or IFrame SRC.

If you are responsible for testing applications, take note when you identify URLs in the parameters. Investigate where that data is used. If you see it is used in an anchor tag, look to see if it is possible to insert JavaScript in this manner.

For those performing static analysis or code review, look for areas where the HREF or SRC attributes are set with untrusted data and make sure proper encoding has been applied. This is less of a concern if the base path of the URL has been hard-coded and the untrusted input only makes up parameters of the URL. These should still be properly encoded.

SQL Injection: Calling Stored Procedures Dynamically

October 26, 2016 by · Comments Off on SQL Injection: Calling Stored Procedures Dynamically
Filed under: Development, Security, Testing 

It is not news that SQL injection is possible within a stored procedure. There have been plenty of articles discussing this issue. However, there is a unique way some developers execute their stored procedures that makes them vulnerable to SQL injection, even when the stored procedure itself is actually safe.

Look at the example below. The code is using a stored procedure, but it is calling the stored procedure using a dynamic statement.

    conn.Open();
    var cmdText = "exec spGetData '" + txtSearch.Text + "'";
    SqlDataAdapter adapter = new SqlDataAdapter(cmdText, conn);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

It doesn't really matter what is in the stored procedure for this particular example, because the stored procedure is not where the injection occurs. Instead, the injection occurs when the EXEC statement is concatenated together. The search parameter is dynamically added in, which we know is bad.

This can be quickly tested by inserting a single quote (') into the search field and viewing the error message returned. It would look something like this:

System.Data.SqlClient.SqlException (0x80131904): Unclosed quotation mark after the character string ”’. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at

With a little more probing, it is possible to get more information about how this SQL is constructed. For example, by placing ',' into the search field, we see a different error message:

System.Data.SqlClient.SqlException (0x80131904): Procedure or function spGetData has too many arguments specified. at System.Data.SqlClient.SqlConnection.

The mention of the stored procedure having too many arguments helps identify this technique for calling stored procedures.

With SQL we have the ability to execute more than one statement in a single batch. In this case, we just need to break out of the current exec statement and add our own statement. Remember, this doesn't affect the execution of the spGetData stored procedure. We are looking at the ability to add new statements to the request.

Let's assume we search for this:

james@test.com';SELECT * FROM tblUsers--

This would change our cmdText to look like:

exec spGetData 'james@test.com';SELECT * FROM tblUsers--'

The above query will execute the spGetData stored procedure and then execute the following SELECT statement, ultimately returning two result sets. In many cases this is not that useful for an attacker, because the second result set would not be returned to the user. However, that doesn't make an attack impossible. Instead, it turns our attacks more toward what we can do, not what we can receive.

At this point, we are able to execute any commands against the SQL Server that the user has permission to. This could mean executing other stored procedures, dropping or modifying tables, adding records to a table, or even more advanced attacks such as manipulating the underlying operating system. An example might be something like this:

james@test.com';DROP TABLE tblUsers--

If the user has permissions, the server would drop tblUsers, causing a lot of problems.

When calling stored procedures, it should be done using command parameters, rather than dynamically. The following is an example of using proper parameters:

    conn.Open();
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "spGetData";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Connection = conn;
    cmd.Parameters.AddWithValue("@someData", txtSearch.Text);
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

The code above adds parameters to the command object, removing the ability to inject into the dynamic code.

It is easy to think that because it is a stored procedure, and the stored procedure may be safe, that we are secure. Unfortunately, simple mistakes like this can lead to a vulnerability. Make sure that you are properly making database calls using parameterized queries. Don’t use dynamic SQL, even if it is to call a stored procedure.

XXE and .Net

May 26, 2016 by · Comments Off on XXE and .Net
Filed under: Development, Security 

XXE, or XML External Entity, is an attack against applications that parse XML. It occurs when XML input contains a reference to an external entity that the application wasn't expected to have access to. In this article, I will discuss how .Net handles XML for certain objects and how to properly configure those objects to block XXE attacks. It is important to understand that different versions of the .Net framework handle this differently. I will point out the differences for each object.

I will cover the XmlReader, XmlTextReader, and XmlDocument objects. Here is a quick summary of the default settings:

Object          Framework Version    Safe by Default?
XmlReader       Prior to 4.0         Yes
XmlReader       4.0+                 Yes
XmlTextReader   Prior to 4.0         No
XmlTextReader   4.0+                 No
XmlDocument     4.5 and earlier      No
XmlDocument     4.6+                 Yes

XmlReader

Prior to 4.0

The ProhibitDtd property is used to determine if a DTD will be parsed.

  • True (default) – throws an exception if a DTD is identified. (See Figure 1)
  • False – Allows parsing the DTD. (Potentially Vulnerable)

Code that throws an exception when a DTD is processed: – By default, ProhibitDtd is set to true and will throw an exception when an Entity is referenced.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlReader myReader = XmlReader.Create(new StringReader(xml));
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Exception when executed:

[Figure 1]

XXE 1

Code that allows a DTD to be processed: – Using the XmlReaderSettings object, it is possible to allow the parsing of the entity. This could make your application vulnerable to XXE.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlReaderSettings rs = new XmlReaderSettings();

    rs.ProhibitDtd = false;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Output when executed showing injected text:

[Figure 2]

XXE 2

.Net 4.0+
In .Net 4.0, Microsoft changed from the ProhibitDtd property to the new DtdProcessing enumeration. There are now three options:

  • Prohibit (default) – Throws an exception if a DTD is identified.
  • Ignore – Ignores any DTD specifications in the document, skipping over them and continues processing the document.
  • Parse – Will parse any DTD specifications in the document. (Potentially Vulnerable)

Code that throws an exception when a DTD is processed: – By default, the DtdProcessing is set to Prohibit, blocking any external entities and creating safe code.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlReader myReader = XmlReader.Create(new StringReader(xml));
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Exception when executed:

[Figure 3]

XXE 3

Code that ignores DTDs and continues processing: – Using the XmlReaderSettings object, setting DtdProcessing to Ignore will skip over any DTDs. In this case, an exception is still thrown because the document references an entity that was never processed.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlReaderSettings rs = new XmlReaderSettings();
    rs.DtdProcessing = DtdProcessing.Ignore;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Output when executed ignoring the DTD (Exception due to trying to use the unprocessed entity):

[Figure 4]

XXE 4

Code that allows a DTD to be processed: Using the XmlReaderSettings object, setting DtdProcessing to Parse will allow processing the entities. This potentially makes your code vulnerable.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";
			
    XmlReaderSettings rs = new XmlReaderSettings();
    rs.DtdProcessing = DtdProcessing.Parse;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();           
}

Output when executed showing injected text:

[Figure 5]

XXE 5


XmlTextReader

The XmlTextReader uses the same properties as the XmlReader object, however there is one big difference: the XmlTextReader defaults to parsing XML entities, so you need to explicitly tell it not to.

Prior to 4.0

The ProhibitDtd property is used to determine if a DTD will be parsed.

  • True – throws an exception if a DTD is identified. (See Figure 1)
  • False (Default) – Allows parsing the DTD. (Potentially Vulnerable)

Code that allows a DTD to be processed: (Potentially Vulnerable) – By default, the XmlTextReader sets the ProhibitDtd property to false, allowing entities to be parsed and the code to potentially be vulnerable.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    while (myReader.Read())
    {
         if (myReader.NodeType == XmlNodeType.Element)
         {
             Console.WriteLine(myReader.ReadElementContentAsString());
         }
    }
    Console.ReadLine();
}

Code that blocks the DTD from being parsed and throws an exception: – Explicitly setting the ProhibitDtd property to true will block DTDs from being processed, making the code safe from XXE. Notice that the XmlTextReader exposes the ProhibitDtd property directly; it doesn't have to use the XmlReaderSettings object.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    myReader.ProhibitDtd = true;

    while (myReader.Read())
    {
       if (myReader.NodeType == XmlNodeType.Element)
       {
           Console.WriteLine(myReader.ReadElementContentAsString());
       }
    }
    Console.ReadLine();
}

4.0+

In .Net 4.0, Microsoft changed from the ProhibitDtd property to the new DtdProcessing enumeration. There are now three options:

  • Prohibit – Throws an exception if a DTD is identified.
  • Ignore – Ignores any DTD specifications in the document, skipping over them and continues processing the document.
  • Parse (Default) – Will parse any DTD specifications in the document. (Potentially Vulnerable)

Code that allows a DTD to be processed: (Potentially Vulnerable) – By default, the XmlTextReader sets DtdProcessing to Parse, making the code potentially vulnerable to XXE.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    while (myReader.Read())
    {
        if (myReader.NodeType == XmlNodeType.Element)
        {
            Console.WriteLine(myReader.ReadElementContentAsString());
        }
    }
    Console.ReadLine();
}

Code that blocks the DTD from being parsed: – To block entities from being parsed, you must explicitly set the DtdProcessing property to Prohibit or Ignore. Note that this is set directly on the XmlTextReader, not through the XmlReaderSettings object.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));
			
    myReader.DtdProcessing = DtdProcessing.Prohibit;

    while (myReader.Read())
    {
         if (myReader.NodeType == XmlNodeType.Element)
         {
             Console.WriteLine(myReader.ReadElementContentAsString());
         }
    }
    Console.ReadLine();
}

Output when the DTD is prohibited:

[Figure 6]

XXE 6


XmlDocument

For the XmlDocument, you need to change the default XmlResolver object to prohibit a DTD from being parsed.

.Net 4.5 and Earlier

By default, the XmlDocument uses an XmlUrlResolver, which will resolve the external entities in any DTD included in the XML document. To prohibit this, set XmlResolver = null.

Code that does not set the XmlResolver properly (potentially vulnerable) – The default XmlResolver will resolve entities, making the following code potentially vulnerable.

static void Load()
{
     string fileName = @"C:\Users\user\Documents\test.xml";

     XmlDocument xmlDoc = new XmlDocument();

     xmlDoc.Load(fileName);

     Console.WriteLine(xmlDoc.InnerText);

     Console.ReadLine();
}

Code that sets the XmlResolver to null, blocking any DTD entities from resolving: – To block entities from being parsed, you must explicitly set the XmlResolver to null. This example uses LoadXml instead of Load, but both work the same in this case.

static void LoadXML()
{
     string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc 
	[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]
	><doc>&win;</doc>";

     XmlDocument xmlDoc = new XmlDocument();

     xmlDoc.XmlResolver = null;

     xmlDoc.LoadXml(xml);

     Console.WriteLine(xmlDoc.InnerText);

     Console.ReadLine();
}

.Net 4.6

It appears that in .Net 4.6, the XmlResolver defaults to null, making the XmlDocument safe by default. However, you can still set the XmlResolver explicitly, in the same way as prior to 4.6 (see the previous code snippet).

Open Redirect – Bad Implementation

January 14, 2016 by · 1 Comment
Filed under: Security 

I was recently looking through some code and happened to stumble across some logic that attempts to prohibit the application from redirecting to an external site. While this sounds like a pretty simple task, it is common to see it implemented incorrectly. Let's look at the check being performed.


    string url = Request.QueryString["returnUrl"];

    if (string.IsNullOrWhiteSpace(url) || !url.StartsWith("/"))
    {
        Response.Redirect("~/default.aspx");
    }
    else
    {
        Response.Redirect(url);
    }

The first thing I noticed was the line that checks whether the url starts with a "/" character. This is a common mistake when developers try to stop open redirects. The assumption is that redirecting to an external site requires the protocol, for example http://www.developsec.com. By forcing the url to start with the "/" character, it is impossible to get the "http:" in there. Unfortunately, it is also possible to use //www.developsec.com as the url, and it will also be interpreted as an absolute url. In the example above, passing in returnUrl=//www.developsec.com satisfies the starting "/" check and the redirect is allowed. The browser then interprets the "//" as absolute and navigates to www.developsec.com.

After putting a quick test case together, I proved out the point and successfully bypassed this logic to redirect to an external site.

Checking for Absolute or Relative Paths

ASP.Net has built-in methods for determining whether a path is relative or absolute. The following code shows one way of doing this.

    string url = Request.QueryString["returnUrl"];
    Uri result;
    bool isAbsolute = Uri.TryCreate(url, UriKind.Absolute, out result);

    if (!isAbsolute)
    {
        Response.Redirect(url);
    }
    else
    {
        Response.Redirect("~/default.aspx");
    }

In the above example, if the URL is absolute (starts with a protocol, http/https, or starts with “//”) it will just redirect to the default page. If the url is not absolute, but relative, it will redirect to the url passed in.

While doing some research I came across a recommendation to use the following:

    if (Uri.IsWellFormedUriString(url, UriKind.Relative))

When using the above logic, it flagged //www.developsec.com as a relative path, which is not what we are looking for. The previous logic correctly identified it as an absolute url. There may be other methods of doing this, and MVC provides some other functions as well that we will cover in a different post.
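A quick console sketch, based on the behavior described above, shows the difference between the two checks:

    string test = "//www.developsec.com";
    Uri result;

    // Treated as absolute (protocol-relative), so the redirect gets blocked.
    Console.WriteLine(Uri.TryCreate(test, UriKind.Absolute, out result));  // True

    // Flagged as a well-formed relative URL, which would let it through.
    Console.WriteLine(Uri.IsWellFormedUriString(test, UriKind.Relative));  // True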

Conclusion

Make sure that you have a solid understanding of the problem and the different ways it works. It is easy to overlook some of these different techniques. There is a lot to learn, and we should be learning every day.

ASP.Net Insufficient Session Timeout

October 6, 2015 by · Comments Off on ASP.Net Insufficient Session Timeout
Filed under: Development, Security, Testing 

A common security concern found in ASP.Net applications is Insufficient Session Timeout. In this article, the focus is not on the ASP.Net session failing to terminate, but rather on the forms authentication cookie that is still valid after logout.

How to Test

  • User is currently logged into the application.
  • User captures the forms authentication cookie, .ASPXAUTH by default (the name may differ between applications).
    • Cookie can be captured using a browser plugin or a proxy used for request interception.
  • User saves the captured cookie for later use.
  • User logs out of the application.
  • User requests a page on the application, passing the previously captured authentication cookie.
  • The page is processed and access is granted.

Typical Logout Options

  • The application calls FormsAuthentication.SignOut()
  • The application sets the Cookie.Expires property to a previous DateTime.

Cookie Still Works!!

Following the user process above, the cookie still provides access to the application as if the logout never occurred. So what is the deal? The key is that unlike a true "session", which is maintained on the server, the forms authentication cookie is self-contained. It does not have a server-side component to stay in sync with. Among other things, the authentication cookie has your username or ID, possibly roles, and an expiration date. When the cookie is received by the server it will be decrypted (please tell me you are using protection="All") and the data extracted. If the cookie's internal expiration date has not passed, the cookie is accepted and processed as valid.
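Conceptually, the server-side check looks something like this sketch (cookieValue stands in for the raw cookie value from the request; error handling omitted):

    // Decrypt the self-contained ticket carried in the cookie.
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookieValue);

    // The only expiration check is against the date baked into the ticket.
    if (!ticket.Expired)
    {
        // Ticket accepted; ticket.Name identifies the user.
    }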

So what did FormsAuthentication.SignOut() do?

If you look under the hood of the .Net framework (it has been a few years, but I doubt much has changed), you will see that FormsAuthentication.SignOut() really just removes the cookie from the browser. There is no server-side invalidation; it merely asks the browser to remove the cookie by clearing the value and back-dating the Expires property. While this does remove the cookie from the browser, it has no effect on a copy of the original cookie you may have captured. The only sure way to make the cookie inactive before its internal timeout occurs would be to change the machine key in web.config, which is not a reasonable solution.

Possible Mitigations

You should be protecting your cookie by setting the HttpOnly and Secure properties. HttpOnly tells the browser not to allow JavaScript access to the cookie value, an important step in protecting the cookie from theft via cross-site scripting. The Secure flag tells the browser to only send the authentication cookie over HTTPS, making it much more difficult for an attacker to intercept the cookie as it travels to the server.

Set a short timeout (15 minutes) on the cookie to decrease the window an attacker has to obtain the cookie.
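In ASP.Net, these cookie protections and the shorter timeout can be configured in web.config; a minimal sketch (the loginUrl is illustrative):

    <system.web>
      <!-- Mark cookies HttpOnly and require HTTPS -->
      <httpCookies httpOnlyCookies="true" requireSSL="true" />
      <authentication mode="Forms">
        <!-- Short timeout (in minutes) and an encrypted, validated ticket -->
        <forms loginUrl="~/login.aspx" timeout="15" requireSSL="true"
               slidingExpiration="true" protection="All" />
      </authentication>
    </system.web>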

You could attempt to build a tracking system to manage the authentication cookie on the server to disable it before its time has expired. Maybe something for another post.

Understand how the application is used to determine how risky this issue may be. If the application is not used on shared/public systems and the cookie is protected as mentioned above, the attack surface is significantly decreased.

Final Thoughts

If you are facing this type of finding and it is a forms authentication cookie issue, not the Asp.Net session cookie, take the time to understand the risk. Make sure you understand the settings you have and the priority and sensitivity of the application to properly understand “your” risk level. Don’t rely on third party risk ratings to determine how serious the flaw is. In many situations, this may be a low priority, however in the right app, this could be a high priority.

Static Analysis: Analyzing the Options

April 5, 2015 by · Comments Off on Static Analysis: Analyzing the Options
Filed under: Development, Security, Testing 

When it comes to automated testing for applications there are two main types: Dynamic and Static.

  • Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
  • Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn't see the code as it is running, it has the ability to trace how data flows through the application down to the function level.

Dynamic scanning, an important component of any secure development workflow, analyzes a system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.

If you are thinking about adding static analysis to your process there are a few things to think about. Keep in mind there is not just one factor that should be the decision maker. Budget, in-house experience, application type and other factors will combine to make the right decision.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The vast options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand what option may be right.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support many of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP's Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is the cost; most of them are very expensive. Here are a few in-house benefits:

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results

Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren't available, or budget constraints, using a 3rd party may be the right solution. There is always the question of whether you are ok with sending your code to a 3rd party, but many are ok with this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time consuming pieces of a static analysis tool, right there with getting it set up and configured properly. Some scans may return upwards of tens of thousands of results. Weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.

Conclusion

Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.

A Pen Test is Coming!!

October 18, 2014 by · Comments Off on A Pen Test is Coming!!
Filed under: Development, Security, Testing 

You have been working hard to create the greatest app in the world.  Ok, so maybe it is just a simple business application, but it is still important to you.  You have put countless hours of hard work into creating this masterpiece.  It looks awesome, and does everything that the business has asked for.  Then you get the email from security: Your application will undergo a penetration test in two weeks.  Your heart skips a beat and sinks a little as you recall everything you have heard about this experience.  Most likely, your immediate action is to go on the defensive.  Why would your application need a penetration test?  Of course it is secure, we do use HTTPS.  No one would attack us, we are small.  Take a breath...  it is going to be alright.

All too often, when I go into a penetration test, the developers start on the defensive.  They don’t really understand why these ‘other’ people have to come in and test their application.  I understand the concerns.   History has shown that many of these engagements are truly considered adversarial.  The testers jump for joy when they find a security flaw.  They tell you how bad the application is and how simple the fix is, leading to you feeling about the size of an ant.  This is often due to a lack of good communication skills.

Penetration testing is adversarial.  It is an offensive assessment to find security weaknesses in your systems.  This is an attempt to simulate an attacker against your system.  Of course there are many differences, such as scope, timing and rules, but the goal is the same.  Let's see what we can do on your system.  Unfortunately, I find that many testers don't have the communication skills to relay the information back to the business and developers in a positive way.  I can't tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck and then leave.  If that is your penetration tester, find a new one.  First, that attitude breaks down the communication with the client and doesn't help promote a secure atmosphere.  We don't get anywhere by belittling the teams that have worked hard to create their application.  Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified.  Just listing a bunch of flaws is fairly useless to a company.

These engagements should be worth everyone’s time.  There should be positive communication between the developers and the testing team.  Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get.  The engagement should be helpful.  With the right company, you will get a solid assessment and recommendations that you can do something with.  If you don’t get that, time to start looking at another company for testing.  Make sure you are ready for the test.   If the engagement requires an environment to test in, have it all set up.  That includes test data (if needed).   The testers want to hit the ground running.  If credentials are needed, make sure those are available too.  The more help you can be, the more you will benefit from the experience.

As much as you don't want to hear it, there is a very high chance the test will find vulnerabilities.  While it would be great if applications didn't have vulnerabilities, it is fairly rare to find one without any.  Use this experience to learn and train on security issues.  Take the feedback as constructive criticism, not someone attacking you.  Trust me, you want the pen testers to find these flaws before a real attacker does.

Remember that this is for your benefit.  We as developers also need to stay positive.  The last thing you want to do is challenge the pen testers, saying your app is not vulnerable.  The teams that usually do that are the most vulnerable.  Stay positive and it will be a great learning experience.