ASP.Net and CSRF

January 7, 2013 · Filed under: Development, Security

Cross-site request forgery (CSRF) is a very common vulnerability today.  Like most frameworks, ASP.Net is not immune by default.  There are built-in features that can be enabled to help reduce the attack surface; however, we need to be aware of how they work and the situations in which they do not work.  First, let's provide a quick review of what CSRF is.

CSRF Overview

CSRF is really a result of the browser's willingness to submit cookies to the server they are associated with.  Of course, this has to happen so that your sessions stay active and you don't have to enter your credentials on every request.  The problem arises when a request to one site (say site A) is made by a second site (say site B) without your permission or interaction.  Let's take a look at a quick example of what we are talking about. 

Say we have a site that allows an administrator to delete users.  The administrator is provided with a table listing all the users and a link to delete each one.  That link looks something like this:

http://www.jardinesoftware.com/delete.aspx?userid=6

When an administrator clicks this link, the application first checks that he is authenticated (using the session or authentication cookie) and authorized.  It then calls the functions responsible for deleting the user with the id of 6.

The issue here is that there is nothing unique in this request.  If another user (without administrative rights) learns of this URL, it may be possible to get an administrator to run it for him.  Here is how that works:

  1. The attacker crafts the request he wants to be executed by the administrator.  In this case, it could be http://www.jardinesoftware.com/delete.aspx?userid=8
  2. The attacker sends the administrator an email containing this link (probably obfuscated so he doesn't realize it is calling the delete function on the website).  The attacker could also send a link to a seemingly innocent site that contains an image tag with a src attribute pointing to the delete link above.
  3. The victim (administrator) MUST already be logged into the application when he clicks the link. 
  4. The browser sees the request for JardineSoftware.com and kindly appends the cookies for that domain.
  5. The server receives the cookies (for authentication) and the request and processes it.  The server has no way of knowing that the user didn’t actually initiate the request on purpose.
  6. The user is deleted and the admin is none the wiser.

To resolve this, we need something to make the request unique.  When the request contains a value that the attacker cannot know or guess for the victim, the forged request will fail.  Keep in mind that in the presence of XSS, many of these protections can be bypassed.  There are many different ways to add this uniqueness, which we will cover next.  Each solution described below has pros and cons that we must be aware of.

ViewState

.Net web forms have a feature called ViewState which allows storing state information on the client.  You typically don't see this, because it is a hidden field (unless you view source).  ViewState is not guaranteed to be unique across users.  At times, there may be unique values in there that we don't think about (the user name, for example), but for the most part, ViewState is not sufficient to protect against CSRF.  Enter ViewStateUserKey.  ViewStateUserKey provides a unique value within the ViewState per user session.  It is not enabled by default and must be set by the developer.  The property can be set in the Page_Init event on each page or in the master page.  The following is an example of how this can be set:

protected void Page_Init(object sender, EventArgs e)
{
    // Tie the ViewState to the current user's session
    Page.ViewStateUserKey = Session.SessionID;
}

Microsoft has made some changes in Visual Studio 2012.  If you create a new Web Forms application, it will include some additional CSRF changes to help mitigate the issue out of the box.  The following shows an example of the new Master Page Init method (non-relevant code has been removed):

protected void Page_Init(object sender, EventArgs e)
{
    // The code below helps to protect against XSRF attacks
    var requestCookie = Request.Cookies[AntiXsrfTokenKey];
    Guid requestCookieGuidValue;

    if (requestCookie != null && Guid.TryParse(requestCookie.Value, out requestCookieGuidValue))
    {
        // Use the Anti-XSRF token from the cookie
        _antiXsrfTokenValue = requestCookie.Value;
        Page.ViewStateUserKey = _antiXsrfTokenValue;
    }
    else
    {
        // Generate a new Anti-XSRF token and save to the cookie
        _antiXsrfTokenValue = Guid.NewGuid().ToString("N");
        Page.ViewStateUserKey = _antiXsrfTokenValue;
    }
}

In the code above, we can see that the ViewStateUserKey is now being set in the Master Page by default.  What a great addition.  So what are the limits here?

For starters, and hopefully this is obvious, this technique doesn't work on requests that don't use ViewState.  Remember the example we used earlier in the post?  There is no ViewState there, so this doesn't offer any protection for that situation.  There are also some other situations that could lead to this not working.  In .Net 2.0, with EventValidation disabled, ViewStateUserKey would not get validated if the ViewState was empty.  I have discussed the ability to pass an empty ViewState before, and this is one of the perks.  Many times ViewState may be present, but the developers do not need it to process the request.  If we pass it as __VIEWSTATE= with no content, then in 2.0 ViewStateUserKey will not get checked.  This was changed in .Net 4.0, where the framework now checks ViewStateUserKey even if the ViewState is empty.  This change doesn't affect requests that don't use ViewState, like the example above.

Nonce or Anti-Forgery Token

Another technique that can be used to protect requests from CSRF is what is called a 'Nonce'.  A Nonce is a single-use token that gets included with every request.  This token is only known to the user and changes for each request.  The idea is that only the requestor of the page will have a valid token to submit with the action.  In our example above, a new parameter would need to exist, such as this:

http://www.jardinesoftware.com/delete.aspx?userid=6&antiforgery=GJ38r4Elke7823SERw

Yes, that value for the antiforgery parameter is just made up.  It should be random so it is not guessable by an attacker.  This would limit an attacker’s ability to know what YOUR request is to the resource.  This is a great way to mitigate CSRF, but can be tricky to implement.  ASP.Net MVC has built in functionality for this.  For Web forms, you either have to build it, or you can look to OWASP at their CSRFGuard project.   I am not sure how stable it is for .Net, but it could be a good starting point.
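
For reference, here is a minimal sketch of what that built-in MVC support looks like.  The view renders a hidden token with @Html.AntiForgeryToken() inside the form, and the attribute below rejects any post whose token does not match the one issued to the user.  The DeleteUser action and its userId parameter are hypothetical names used only for illustration.

// Inside an MVC controller (System.Web.Mvc)
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult DeleteUser(int userId)
{
    // The framework has already validated the anti-forgery token
    // before this method runs; the delete logic would go here.
    return RedirectToAction("Index");
}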

In Visual Studio 2012, the default Web Forms application template attempts to add anti-CSRF functionality into the master page.  The following code snippet shows some of the code that does this (code that is not relevant has been removed, and there is supporting code in other methods that is not shown):

protected void Page_Init(object sender, EventArgs e)
{
    // The code below helps to protect against XSRF attacks
    var requestCookie = Request.Cookies[AntiXsrfTokenKey];
    Guid requestCookieGuidValue;
    if (requestCookie != null && Guid.TryParse(requestCookie.Value, out requestCookieGuidValue))
    {
        // Use the Anti-XSRF token from the cookie
        _antiXsrfTokenValue = requestCookie.Value;
    }
    else
    {
        // Generate a new Anti-XSRF token and save to the cookie
        _antiXsrfTokenValue = Guid.NewGuid().ToString("N");

        var responseCookie = new HttpCookie(AntiXsrfTokenKey)
        {
            HttpOnly = true,
            Value = _antiXsrfTokenValue
        };
        if (FormsAuthentication.RequireSSL && Request.IsSecureConnection)
        {
            responseCookie.Secure = true;
        }
        Response.Cookies.Set(responseCookie);
    }
}

In the code above, we can see the anti-CSRF value being generated and stored in the cookies collection.  Although not shown, the anti-CSRF value is also stored in the ViewState.  This does show that Microsoft is making an attempt to help developers protect their applications by providing default implementations like this.
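
The validation half of the template is worth a quick look as well.  The sketch below is a rough reconstruction of the rest of the template, not the exact code: on the initial load the token is stored in the ViewState, and on post back it is compared against the value from the cookie, throwing an exception if they do not match.

protected void master_Page_PreLoad(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Store the Anti-XSRF token (and the current user name) in ViewState
        ViewState[AntiXsrfTokenKey] = Page.ViewStateUserKey;
        ViewState[AntiXsrfUserNameKey] = Context.User.Identity.Name ?? String.Empty;
    }
    else
    {
        // Validate the Anti-XSRF token against the value from the cookie
        if ((string)ViewState[AntiXsrfTokenKey] != _antiXsrfTokenValue
            || (string)ViewState[AntiXsrfUserNameKey] != (Context.User.Identity.Name ?? String.Empty))
        {
            throw new InvalidOperationException("Validation of Anti-XSRF token failed.");
        }
    }
}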

Require Credentials

Another technique that is often employed is to require the user to re-enter their credentials before performing a sensitive transaction.  In the example above, when the administrator clicked the link, he would need to enter his password before the delete occurred.  This is a fairly effective approach because the attacker doesn't know the administrator's password (I hope).  The downside is that users may get upset if they constantly have to re-enter their credentials.  If we place too much burden on the users, they may decide to use something else, so this must be used sparingly. 

CAPTCHA

CAPTCHAs can also be used to help protect against CSRF.  Again, this is a unique value per user that the attacker should not know.  Like the credentials solution, it does require more work by the user.  Check out Rafal Los' post, "Is unusable the same as 'secure'? Why security is borked.", where he points out the difficulties with a CAPTCHA system when humans can't read the CAPTCHAs.

Use POST Requests

This is not really a mitigation, but more of a recommendation, because CSRF can be performed on POST requests too.  I just wanted to mention it here for completeness, but it really doesn't carry much weight.  All an attacker needs to do is get the victim to visit a page with a hidden form containing the attack request and use JavaScript to auto-submit that form behind the scenes.

Check the Referrer

This is similar to relying on POST requests.  It can be bypassed with the proper tools and isn't a full solution.  It is another item that needs to be implemented properly.  I have seen situations where the referrer check was applied to pages that were fine to access directly, and it caused issues. 

Conclusion

As you can see, there are many different ways to protect against CSRF, including more than are listed here.  Microsoft has implemented some nice new changes into the default Visual Studio 2012 Web Forms template to help protect against CSRF by default.  It is important to understand what the implementation is and the limits of its protection.  Without this understanding it is easy to overlook a situation where your application could be vulnerable.

SQL Injection in 2013: Let's Work Together to Remediate

January 4, 2013 · Filed under: Development, Security

We just started 2013, and SQL Injection has been a vulnerability plaguing us for over 10 years.  It is time to take action.  Not that we haven't been taking action, but it is still prevalent in web applications.  We need to set attainable goals.  Does it seem attainable to say we will eradicate all SQL Injection in 2013?  Probably not.  This is mostly due to legacy applications and the difficulty in modifying their code.  There are, however, goals we can set to stop writing new code that is vulnerable to SQL Injection.  Fortunately, this is not a vulnerability that is poorly understood.  Here are some thoughts for moving forward.

Don’t write SQL Injection Code

OK, this sounds like what everyone is saying, and it is.  Is it difficult to do? No.   Like anything, this is something we need to commit to and consciously make an effort to do.  Proper SQL Queries are not difficult.  Using parameterized queries is easy to do in most languages now.  Here is a quick example of a parameterized query in .Net:

using (SqlConnection cn = new SqlConnection()) // connection string omitted for brevity
{
    using (SqlCommand cmd = new SqlCommand())
    {
        string query = "SELECT fName,lName from Users WHERE fName = @fname";
        cmd.CommandText = query;
        cmd.CommandType = System.Data.CommandType.Text;

        // Bind the untrusted input as a parameter instead of
        // concatenating it into the query string
        cmd.Parameters.AddWithValue("@fname", untrustedInput);
        cmd.Connection = cn;
        cmd.Connection.Open();

        // This is a SELECT, so use ExecuteReader to read the results
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            // process the results...
        }
    }
}

What about stored procedures?  Stored procedures are good, but can be vulnerable to SQL Injection.  This is most common when you generate dynamic queries from within the stored procedure.  Yes, the parameters are passed to the procedure properly, but then used in an insecure way inside the procedure.  If you are unsure if your procedures are vulnerable, look for the use of EXEC or other SQL commands that run SQL code and make sure parameters are handled properly.

Often overlooked is how a stored procedure is called.  You may be using a stored procedure but calling it like so:

string query = "EXEC spGetUser '" + untrustedinput + "'";

The above query can still be vulnerable to SQL Injection by chaining onto the EXEC statement.  So even though the stored procedure may be secure, an attacker may be able to run commands (just not see the output). 
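
A safer way to make that call is to let the command object treat it as a stored procedure and bind the untrusted input as a parameter.  Here is a rough sketch; the @userName parameter name and the connectionString variable are assumptions for the example.

using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("spGetUser", cn))
{
    // Treat this as a stored procedure call, not an ad-hoc EXEC string
    cmd.CommandType = System.Data.CommandType.StoredProcedure;

    // The untrusted input is bound as a parameter, so it cannot chain
    // additional statements onto the call
    cmd.Parameters.AddWithValue("@userName", untrustedinput);

    cn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // process the results...
    }
}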

The key to not writing vulnerable code is to never write it.  Whether it is a proof of concept, some test code, or actual production code, take the time to use secure methods.  The secure way will become second nature, and SQL Injection reduction will be on its way.

Supportively Spread the Word

The key here is Supportively!!  Yes, we have been talking about SQL Injection for years, but have we been doing it the right way, to the right people?  First, enough with the "Developers suck", "Your code sucks!" nonsense.  It is not productive and is probably far more destructive to the relationship between security and developers.  Second, security practitioners meet up at their cons and talk about this all year long.  This may sound crazy, but it is not the security practitioners who are writing the code.  We need to get the information into the developers' hands and minds.  Just putting the information on a blog (like this one) or on a website like OWASP or SANS is not enough to reach developers.  I don't even want to guess at the number of developers who have never heard of OWASP, but I would venture it is higher than you think.  Everyone needs to help spread the word.  Security is talking about it; developers need to be talking about it.  Major development conferences rarely have any content that is security related, and that needs to change.  It needs to be thrown in everyone's lap.  If you see someone writing something insecure, let them know so they can learn.  We can't assume everyone knows everything.

Let's start including the secure way of writing SQL queries in our tutorials, books, and classes so that all we see is the right way to do it.  I mentioned this a year or so ago, and everyone cried that it would make the code samples in books and tutorials too long and impossible to follow.  First, I disagree that it would be that detrimental.  Second, where do developers get a lot of their code?  From tutorials, samples, books, and classes.  We don't reinvent the wheel when we need to do something.  We look for someone who has done it, take the concept, make modifications to fit our situation, and run with it.  All too often this leads to a lot of vulnerabilities because we refuse to write secure code that is put out for anyone to use.  We all need to get better at this.  And if you are the author, maybe it adds a few pages to your book ;).  

Take Responsibility

We can no longer blame others for the code we write.  Maybe the code was copied from an online resource, but as soon as it is in your paws, it is your code.  It is not the fault of MSSQL or Oracle that they allow you to write dynamic SQL queries.  That is part of the power of these systems, and some people will use it.  It is our responsibility to know how to use the systems we have.  Many frameworks now try to help stop SQL Injection by default.  If you are relying on frameworks, know how they work, and keep them patched.  We just saw Ruby on Rails release a patch to fix a SQL Injection issue. 

Conclusion

So maybe this was a lot of rambling, or maybe it will mean something and get a few people thinking about defending against SQL Injection.  I apologize for the small tangents; those are part of another post that will be coming soon.  The purpose of this post is to start setting some goals that we can achieve in 2013.  Not everyone can eat an entire apple in one bite, so let's take some small bites and really chew on them for the year.  Let's focus on what we can do and do it well.

Authorization: Bad Implementation

January 3, 2013 · Filed under: Development, Security, Testing

A few years ago, I joined a development team and got a chance to poke around a little bit for security issues.  For a team that didn't think much about security, it didn't take long to identify some serious vulnerabilities.  One of the issues I saw related to authorization for privileged areas of the application.  Authorization is a critical control when it comes to protecting your applications, because you don't want unauthorized users performing actions they should not be able to perform. 

The application was designed using security by obscurity: that’s right, if we only display the administrator navigation panel to administrators, no one will ever find the pages.   There was no authorization check performed on the page itself.  If you were an administrator, the page displayed the links that you could click.  If you were not an administrator, no links.

In the security community, we all know (or should know) that this is not acceptable.  Unfortunately, we are still working to get all of our security knowledge into developers' hands.  When this vulnerability was identified, the usual first argument was raised: "No hacker is going to guess the page names and paths."  This argument is pretty common, usually because we don't think of internal malicious users, or of authorized individuals inadvertently sharing this information on forums.  Let's not forget DirBuster and the other file brute-force tools that are available.  Remember, just because you think the naming convention is clever does not mean it can't be found.

The group understood the issue, and a developer was tasked with resolving it.  Great: we are getting this fixed, and it was a high priority.  The problem?  There was no consultation with the application security guy (me at the time) on the proposed solution.  I don't have all the answers, and anyone who says they do is foolish.  However, it is a good idea to involve an application security expert in a large-scale remediation of a vulnerability like this, and here is why.

The developer decided that adding a check to the Page_Init method to check the current user’s role was a good idea.  At this point, that is a great idea.  Looking deeper at the code, the developer only checked the authorization on the initial page request.  In .Net, that would look something like this:

protected void Page_Init(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        //Check the user authorization on initial load
        if (!Context.User.IsInRole("Admin"))
        {
            Response.Redirect("Default.aspx", true);
        }
    }
}

What happens if the user tricks the page into thinking it is a postback on the initial request?  Depending on the system configuration, this can be pretty simple.  By default it is a little more difficult, because EventValidation is enabled.  Unfortunately, this application didn't use EventValidation. 

There are two ways to tell the request that it is a postback:

  1. Include the __EVENTTARGET parameter.
  2. Include the __VIEWSTATE parameter.

So let's say we have an admin page that looks like the above code snippet, checking for admins and redirecting if the user is not one.  Accessing the page like this would bypass the admin check and display the page:

http://localhost:49607/Admin.aspx?__EVENTTARGET=

This is an easy oversight to make, because avoiding it requires a thorough understanding of how the .Net framework determines postback requests.  It gives us a false sense of security, because it only takes one user who knows these details to determine how to bypass the check. 

Let's be clear here: although this is possible, there are a lot of factors that determine whether it will actually work.  For example, I have seen many pages where it was possible to do this, but all of the data was loaded on the INITIAL page load.  The code may have looked like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        LoadDropDownLists();
        LoadDefaultData();
    }
}

In this situation, you may be able to get to the page, but not do anything, because the initial data it needs hasn't been loaded.  In addition, EventValidation may cause a problem: if you submit a blank ViewState value, EventValidation can catch that and throw an exception.  In .Net 4.0+, even if EventValidation is disabled, the use of ViewStateUserKey can also block this attempt. 

As a developer, it is important to understand how this feature works so we don't make this simple mistake.  It is not much more difficult to change the logic to test the user's authorization on every request, rather than just on the initial page load, as the sketch below shows.
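
A minimal sketch of that change, using the same code from above, simply moves the role check outside of the IsPostBack condition so it runs on the initial request and on every post back:

protected void Page_Init(object sender, EventArgs e)
{
    // Check the user's authorization on every request,
    // not just the initial page load
    if (!Context.User.IsInRole("Admin"))
    {
        Response.Redirect("Default.aspx", true);
    }
}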

As a penetration tester, we should be testing this during the assessment to verify that a simple mistake like this has not been implemented and overlooked.

This is another post that shows the importance of a good security configuration for .Net and a solid understanding of how the framework actually works.  In .Net 2.0+, EventValidation and ViewStateMac are enabled by default.  In Visual Studio 2012, the default Web Forms application template also adds an implementation of ViewStateUserKey.  Code safe, everyone.

ViewState XSS: What’s the Deal?

September 17, 2012 · Filed under: Development, Security, Testing

Many of my posts have discussed some of the protections that ASP.Net provides by default: for example, Event Validation, ViewStateMac, and ViewStateUserKey.  So what happens when we are not using these protections?  Each of these has a different effect on what is possible from an attacker's standpoint, so it is important to understand what these features do for us.  Many of them are covered in prior posts.  I often get asked the question, "What can happen if the ViewState is not properly protected?"  This can be a difficult question, because it depends on how it is not protected and also on how it is used.  One thing that can potentially be exploited is cross-site scripting (XSS).  This post will not dive into what XSS is, as there are many other resources that do that.  Instead, I will show how an attacker could take advantage of reflected XSS by using unprotected ViewState.

For this example, I am going to use the most basic of login forms.  The form doesn't even actually work, but it is functional enough to demonstrate how this vulnerability could be exploited.  The form contains user name and password textboxes, a login button, and an ASP.Net label control that displays copyright information.  Although probably not very obvious, our attack vector here is going to be the copyright label.

Why the Label?

You may be wondering why we are going after the label here.  The biggest reason is that the developers have probably overlooked output encoding on what would normally be pretty static text.  Copyrights do not change that often, and they are usually set on the initial page load.  All post-backs will then just re-populate the data from the ViewState.  That is our entry point.  Here is a quick look at what the page code looks like:

 1: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
 2:     <span>UserName:</span><asp:TextBox ID="txtUserName" runat="server" />
 3:     <br />
 4:     <span>Password:</span><asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />
 5:     <br />
 6:     <asp:Button ID="cmdSubmit" runat="server" Text="Login" /><br />
 7:     <asp:Label ID="lblCopy" runat="server" />
 8: </asp:Content>

We can see on line 7 that we have the label control for the copyright data.   Here is the code behind for the page:

 1: protected void Page_Load(object sender, EventArgs e)
 2: {
 3:     if (!Page.IsPostBack)
 4:     {
 5:         lblCopy.Text = "Copy 2012 Test Company";
 6:     }
 7: }

Here you can see that we set the copy text only on the initial page load.  On post back, this value is re-populated from the ViewState.

The Attack

Now that we have an idea of what the code looks like, let's take a look at how we can take advantage of it.  Keep in mind that there are many factors that go into making this work, so it will not work on all systems.

I am going to use Fiddler to do the attack for this example.  In most of my posts, I usually use Burp Suite, but there is a cool ViewState Decoder that is available for Fiddler that I want to use here.  The following screen shows the login form on the initial load:

I will set up Fiddler to break before requests so I can intercept the traffic.  When I click the login button, Fiddler will intercept the request and wait for me to fiddle with the traffic.  The next screen shows the traffic intercepted.  Note that I have underlined the copy text in the ViewState decoder.  This is where we are going to make our change.

The attack will load a simple alert box to demonstrate the presence of XSS.  To load this in the ViewState Decoder's XML format, I am going to encode the attack using HTML entities.  I used the encoder at http://ha.ckers.org/xss.html to perform the encoding.  The following screen shows the data encoded in the encoder:

I need to copy this text from the encoder and paste it into the copyright field in the ViewState decoder window.  The following image shows this being done:

Now I need to click the “Encode” button for the ViewState.  This will automatically update the ViewState field for this request.   Once I do that, I can “Resume” my request and let it complete.   When the request completes, I will see the login page reload, but this time it will pop up an alert box as shown in the next screen:

This shows that I was able to perform an XSS attack by manipulating a ViewState parameter.  And as I mentioned earlier, this is reflected since it is being reflected from the ViewState.  Win for the Attacker.

So What, I Can Attack Myself

Oftentimes, when I talk about this technique, the first response is that the attacker could only run XSS against himself, since this is in the ViewState.  How can we get it to our victim?  The good news for the attacker: .Net is going to help us attack our victims here.  Without going into the details, the premise is that .Net will read the ViewState value from the GET or POST data depending on the request method.  So if we send a GET request, it will read the ViewState from the query string.  If we make the following request to the page, it will pull the ViewState values from the query string and execute the XSS just like the first time we ran it:

http://localhost:51301/Default.aspx?__VIEWSTATE=%2fwEPDwU
KLTE0NzExNjI2OA9kFgJmD2QWAgIDD2QWAgIFD2QWAgIHDw8WAh4
EVGV4dAUlQ29weTxzY3JpcHQ%2bYWxlcnQoOSk7PC9zY3JpcHQ%2b
Q29tcGFueWRkZA%3d%3d&ctl00%24MainContent%24txtUserName=
&ctl00%24MainContent%24txtPassword=
&ctl00%24MainContent%24cmdSubmit=Login

Since we can put this into a GET request, it is easier to send this out in phishing emails or other payloads to get a victim to execute the code.  Yes, as a POST, we can get a victim to run this as well, but we are open to so much more when it is a GET request since we don’t have to try and submit a form for this to work.

How to Fix It

Developers can fix this issue quite easily.  For starters, they need to encode the output.  For the encoding to work, however, you should set the value yourself on post back too.  So instead of just setting that hard-coded value on the initial page load, think about setting it every time.  Otherwise the encoding will not solve the problem.  Additionally, enable the built-in features like ViewStateMac, which will help prevent an attacker from tampering with the ViewState, or consider encrypting the ViewState.
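
Putting those recommendations together, the Page_Load from the example could look something like the sketch below.  Encoding a hard-coded copyright string is harmless, but the same pattern protects any value an attacker could influence, and because the value is set on every request it is never populated straight from the ViewState.

protected void Page_Load(object sender, EventArgs e)
{
    // Set (and encode) the value on every request, not just the
    // initial load, so ViewState never supplies it on post back
    lblCopy.Text = HttpUtility.HtmlEncode("Copy 2012 Test Company");
}

Since the value is being set on every request anyway, disabling ViewState on the control (EnableViewState="false") is another option worth considering here.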

Final Thoughts

This is a commonly overlooked area of security for .Net developers because there are many assumptions and misunderstandings about how ViewState works in this scenario.  The complexity of the configuration doesn't help either.  Many times developers think that since it is a hard-coded value, it can't be manipulated.  We just saw that, under the right circumstances, it very well can be.

As testers, we need to look for this type of vulnerability and understand it so we can help the developers understand the capabilities of it and how to resolve it.  As developers, we need to understand our development language and its features so we don’t overlook these issues.  We are all in this together to help decrease the vulnerabilities available in the applications we use.

Updated [11/12/2012]: Uploaded a video demonstrating this concept.

Another Request Validation Bypass?

August 29, 2012 · Filed under: Development, Security

I stumbled across this BugTraq post (http://www.securityfocus.com/archive/1/524043) on SecurityFocus today that indicates another way to bypass ASP.Net's built-in Request Validation feature.  It was reported by Zamir Paltiel from Seeker Research Center and shows how using a % symbol in the tag name (ex. <%tag>) makes it possible to bypass Request Validation; apparently some versions of Internet Explorer will actually parse that as a valid tag.  Unfortunately, I do not have the specifics of which versions of IE will parse this.  My feeble attempts in IE8 and IE9 did not succeed (and yes, I turned off the XSS filter).  I did a previous post back in July of 2011 (which you can read here: http://www.jardinesoftware.net/2011/07/17/bypassing-validaterequest/) which discussed using Unicode-wide characters to bypass request validation. 

I am not going to go into all the details of the BugTraq post; please read the provided link, as Zamir has done a great write-up.  Instead, I would like to talk about the proper way of dealing with this issue.  Sure, .Net provides some great built-in features (Event Validation, Request Validation, ViewStateMac, etc.), but they are just helpers to our overall security cause.  If finding out that there is a new way to bypass Request Validation opens your application up to Cross-Site Scripting…  YOU ARE DOING THINGS WRONG!!!  Request Validation is so limited by its own nature that, although it is a nice-to-have, it is not going to stop XSS in your site.  We have talked about this time and time again.  Input Validation, Output Encoding.  Say it with me: Input Validation, Output Encoding.  Let's briefly discuss what we mean here (especially you, the dev whose website is now vulnerable to XSS because you relied solely on Request Validation).

Input Validation

There are many things we can do with input validation, but let's not get too crazy here.  Here are some common things we need to think about when doing input validation:

  • What TYPE of data are we receiving?  If you expect an Integer, then make sure that the value casts to an Integer.  If you expect a date-time, then make sure it casts to a date-time. 
  • How long should the data be?  If you only want to allow 2 characters (a state abbreviation, for example), then only allow 2 characters.
  • Validate that the data is in a specific range.  If you have a numeric field, say an age, then validate that it is greater than 0 and less than 150, or whatever your business logic requires.
  • Define a whitelist of allowed characters.  This is big, especially for free-form text.  If you only allow letters and numbers, then only allow letters and numbers.  This can be difficult to define depending on your business requirements, but the more of these checks you add, the better off you will be.
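
As a rough illustration of those checks in .Net, here is a short sketch; the field names and limits below are made up for the example.

// Hypothetical untrusted values pulled from the request
string ageInput = Request.Form["age"];
string stateInput = Request.Form["state"];

// Type and range: must be an integer between 1 and 149
int age;
if (!int.TryParse(ageInput, out age) || age <= 0 || age >= 150)
{
    // Reject the request
}

// Length and whitelist: exactly two upper-case letters (state abbreviation)
if (stateInput == null ||
    !System.Text.RegularExpressions.Regex.IsMatch(stateInput, "^[A-Z]{2}$"))
{
    // Reject the request
}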

Output Encoding

I know a lot of people will disagree with me on this, but I personally believe that this is where XSS gets resolved.  Sure, we can do strict input validation on the front end.  But what if data gets into our application some other way, say from a rogue DBA or a script running directly against the database?  I will not get into all my feelings on this, but know that I am all for implementing input validation.  Now, on to output encoding.  The purpose is to let the client or browser distinguish commands from data.  This is done by encoding the command characters that the parser understands.  For example, for HTML the < character gets replaced with &lt;.  This tells the parser to display the less-than character rather than interpret it as the start of a tag definition. 

Even with all the input validation in the world, we must be encoding our output to protect against cross site scripting.  It is pretty simple to do, although you do have to know what needs encoding and what does not.  .Net itself makes this somewhat difficult since some controls auto encode data and others do not.
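
As a quick example, the encoding itself is a single call made before the untrusted value reaches the response; lblName and untrustedInput are hypothetical names here.

// Encode command characters (such as <, >, & and ") so the browser
// treats the value as data rather than markup
lblName.Text = HttpUtility.HtmlEncode(untrustedInput);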

Can We Fix It?

With little hope that this will get resolved within the .Net framework itself, there are ways we can update Request Validation ourselves if you are using 4.0 or higher.  Request Validation is extensible, so it is possible to create your own class to add this check into Request Validation.  I have included a simple PROOF OF CONCEPT!!! of attempting to detect this.  THIS CODE IS NOT PRODUCTION READY and is ONLY FOR DEMONSTRATION PURPOSES.  USE AT YOUR OWN RISK!  OK, enough of that.  Below is code that will crudely look for the presence of the combination of <% in the request parameter.  There are better ways of doing this; I just wanted to show that it can be done.  Keep in mind that if your application legitimately allows the submission of this combination of characters, this would probably not be a good solution.  Alright, the code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Util;

namespace WebTester
{
    public class NewRequestValidation : RequestValidator
    {
        public NewRequestValidation() { }

        protected override bool IsValidRequestString(
            HttpContext context, 
            string value, 
            RequestValidationSource requestValidationSource, 
            string collectionKey, 
            out int validationFailureIndex)
        {
            validationFailureIndex = -1;

            // This check is not meant to be production code...
            // This is just an example of how to set up 
            // a Request Validator override.  
            // Please pick a better way of searching for this.
            if (value.Contains("<%")) // Look for <%...  
            {
                return false;
            }
            else // Let the default Request Validation take a look.
            {
                return base.IsValidRequestString(
                    context, 
                    value, 
                    requestValidationSource, 
                    collectionKey, 
                    out validationFailureIndex);
            }
        }
    }
}

In order to get a custom request validation override to work in your application, you must tell the application to use it in the web.config.  Below is my sample web.config file updated for this:

  <system.web>
    <httpRuntime requestValidationType="WebTester.NewRequestValidation"/>
  </system.web>

 

Conclusion

We, as developers, must stop relying on built-in items and frameworks to protect us against security vulnerabilities.  We must take charge and start practicing secure coding principles so that when a bug like this one is discovered, we can shrug it off and say that we are not worried because we are properly protecting our site. 

Hopefully this will not affect many people, but I can assure you that you will start seeing this tested by authorized penetration testers and criminals alike.  The good news, if you are protected, is that it is only going to waste a little bit of the attacker's time while they try it out.  The bad news, and you know who I am talking about, is that your site may now be vulnerable to XSS.  BTW, if you were relying solely on Request Validation to protect your site against XSS, you are probably already vulnerable. 

Request Method Can Matter

August 15, 2012 · Filed under: Development, Security

One of the nice features of ASP.Net is that many of the server controls populate their values based upon the request method.  Let's look at a quick example.  If the developer has created a text box on the web form, called txtUserName, then on a post back the Text property will be populated from the proper request collection based on the request method.  So if the form was sent via a GET request, then txtUserName.Text is populated from Request.QueryString["txtUserName"].  If the form was sent via a POST request, then txtUserName.Text is populated from Request.Form["txtUserName"].  I know, master pages and other nuances may change the actual client id, but hopefully you get the point. 

GET REQUEST

txtUserName.Text = Request.QueryString[“txtUserName”]

POST REQUEST

txtUserName.Text = Request.Form[“txtUserName”]

Although this is very convenient for the developer, there are some concerns with certain functionality that should be considered.  Think about a login form.  One of the security rules we always want to follow is to never send sensitive information in the URL.  With that in mind, the application should not allow the login form to be submitted using a GET request, because the user name and password would be passed via the query string.  By default, most login forms will accept both GET and POST requests because of how the framework and server controls work.  Why would someone use a GET request?  Automation?  Easy login: think, for example, of crafting the proper URL and bookmarking the login page so it automatically logs me in to the site.  Although this is not very common, we, as developers, have to protect our users from this type of flaw being abused.  In no way am I saying not to use the server controls; they are great controls.  The point is to be aware of the pitfalls in some situations.

The good news!!  We just need to check the request method and only accept POST requests.  If we receive a GET request, just reject it.  Yes, a user can still submit the GET request, but it won't authenticate them, which defeats the purpose.  Let's take a moment to look at some code that uses the default functionality.

Below is a VERY simple method that demonstrates how both GET and POST requests act.  Although it is not anything more than Response.Write calls, it is sufficient to demonstrate the point.

protected void Page_Load(object sender, EventArgs e)
{
    if (Page.IsPostBack)
    {
        // Encode the value from the TextBox
        Response.Write(HttpUtility.HtmlEncode(txtUserName.Text));
        Response.Write(":");
        // Encode the value from the TextBox
        Response.Write(HttpUtility.HtmlEncode(txtPassword.Text));
    }
}

Here is a POST Request:

POST http://localhost:60452/Default.aspx HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: http://localhost:60452/Default.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
Host: localhost:60452
Content-Length: 316
Connection: Keep-Alive
Pragma: no-cache
Cookie: .ASPXANONYMOUS=YQOZB3SHzQEkAAAANDE
4NDczYzctZjVmNi00ZWEzLWJkYTMtZDZjYmZhN
zY0Y2MylOoTQwCvDTUzA6LJuOO33witVabGXV
4RoXUeyg52RDY1; 
ASP.NET_SessionId=jkne3su2dwemw0lu1kqbjrdq

__VIEWSTATE=%2FwEPDwUJNjk1NzE5OTc4ZGRjhRm%2FtVkqPqFadKC
IAA0lQHoBH0FsR4xVM%2FiTGn7Sew%3D%3D&__EVENTVALIDATION=%2
FwEWBALk1bPACQKMrfuqAgKOsPfSCQKP5u6DCONM%2B3R6i2D%2
FIRsWvIhZp5wqldnzoa%2BjoUVRrng5kifu&ctl00%24
MainContent%24txtUserName=jjardine&ctl00%24MainContent%24
txtPassword=password&ctl00%24MainContent%24cmdSubmit=

The application will write out “jjardine:password” to the browser.  This makes sense, since those two values were passed.  Now let's take a look at the same request sent as a GET:

http://localhost:60452/Default.aspx?__VIEWSTATE=%2FwEPDwUJNjk1N
zE5OTc4ZGRjhRm%2FtVkqPqFadKCIAA0lQHoBH0FsR4xVM%2FiTGn7Sew
%3D%3D&__EVENTVALIDATION=%2FwEWBALk1bPACQKMrfuqAgKOsP
fSCQKP5u6DCONM%2B3R6i2D%2FIRsWvIhZp5wqldnzoa%2BjoUVRrng
5kifu&ctl00%24MainContent%24txtUserName=jjardine
&ctl00%24MainContent%24txtPassword=password&
ctl00%24MainContent%24cmdSubmit=

Again, this will write out “jjardine:password” to the browser.  The big difference here is that we are sending sensitive information in the URL, which, as mentioned above, is a big no-no.  We can't stop a user from submitting a request like this.  However, we can decide not to process it, which should be enough to get someone to stop doing it. 

It is important to note that any form that has sensitive information and uses the server controls like this can be vulnerable to this issue.  There are some mitigations that can be put in place. 

Check the Request Method

It is very easy to check the request method before processing an event.  The below code shows how to implement this feature:

protected void Page_Load(object sender, EventArgs e)
{
    if (Page.IsPostBack)
    {
        if (Request.RequestType == "POST")
        {
            // Encode the Value from the TextBox
            Response.Write(HttpUtility.HtmlEncode(txtUserName.Text));
            Response.Write(":");
            // Encode the value from the TextBox
            Response.Write(HttpUtility.HtmlEncode(txtPassword.Text));
        }
    }
}

Now, if the request is not a POST, it will not process this functionality.  Again, this is a very simplistic example.

Implement CSRF Protection

Implementing CSRF protection is beyond the scope of this post, but the idea is that there is something unique about the request per user session.  As we saw in the GET request example, there are more parameters than just the user name and password.  However, in that example, those fields are all static; there is no randomness.  CSRF protection adds randomness to the request, so even if the user was able to send a GET request, the next session's attempt would no longer work because the random value would be missing or changed.

Large Data

So large data doesn’t sound like a lot here, but it is a mitigation based on the construction of the page.  If the viewstate and other parameters become really long, then they will be too large to put in the URL (remember this is usually limited on length).  If that is the case, the user will not be able to send all the parameters required and will be blocked.  This is usually not the case on login pages as there is usually very little data that is sent.  The viewstate is usually not that big so make sure you are aware of those limits.

 

Why do we care?

Although this may not really seem like that big of an issue, it does pose a risk to an application.  Due to compliance concerns about storing sensitive information in log files, our obligation to protect users' data (especially authentication data), and the fact that this WILL show up on penetration testing reports, this is something that should be investigated.  As you can see, it is not difficult to resolve this issue, especially for the login screen. 

In addition to the login screen, if other forms are set up to support both GETs and POSTs, it could make CSRF attacks easier as well.  Although we can do a CSRF attack with a POST request, a GET can be deployed in more ways.  This risk is often overlooked, but fixing it is an easy win for developers.  Happy coding!!

ViewStateMAC: Seriously, Enable It!

February 1, 2012 · Filed under: Development, Security

I have been doing a lot of research lately around Event Validation and ViewState.  I have always been interested in how Event Validation works under the covers and whether it can be tampered with.  I will attempt to explain that it is, in fact, possible to tamper with the Event Validation field in a similar manner to the way ViewState can be tampered with.  I know, the title of the post reads "ViewStateMAC"; don't worry, I will get to that.  But first, it is important to discuss a little about how Event Validation works to understand why ViewStateMAC is important.

__EVENTVALIDATION – Basics

Event Validation is a feature that is built into ASP.Net web forms.  It is enabled by default, and it serves to ensure that only valid data is received for controls that register valid events or data.  As a quick example, think for a moment about a drop down list.  Each value that is programmatically added to the control will be registered with Event Validation.  Example 1 demonstrates loading values into a drop down list (data binding is also very common).  Each of these values will now exist in the Event Validation data.  When a user attempts to submit a form, or post back, the application will verify that the value submitted for that control is a valid value.  If it is not, a not-so-pretty exception is generated.

Example 1
private void FillDDL2()
{
  ddlList.Items.Add(new ListItem("1", "1"));
  ddlList.Items.Add(new ListItem("2", "2"));
  ddlList.Items.Add(new ListItem("3", "3"));
  ddlList.Items.Add(new ListItem("4", "4"));
  ddlList.Items.Add(new ListItem("5", "5"));
}

__EVENTVALIDATION – Hash

Event Validation is primarily based on storing a hash value for each of the values it needs to check for validity.  More specifically, a specific routine is run to create an integer-based hash from the control's unique id and the specified value (the routine is beyond the scope of this post).  Every value that gets stored in Event Validation has a corresponding integer hash value.  These hash values are stored in an ArrayList, which gets serialized into the string that we see in the __EVENTVALIDATION hidden form field on the web page. 

__EVENTVALIDATION – Page Response

When a page is requested by the user, it goes through an entire lifecycle of events.  To understand how Event Validation really works, let's first take a look at how the field is generated.  Before the page is rendered, each control or event that is registered for Event Validation will have its value hashed (see the previous section) and added to the array list.  As mentioned before, this can include values for a list control, valid events for the page or controls (for example, button click events), and even ViewState.  This array is serialized and stored in the __EVENTVALIDATION hidden field.

__EVENTVALIDATION – Post Back Request

The request is where Event Validation takes action.  When a user submits a post back to the server, Event Validation will validate the values that have been registered.  So for the drop down list from Example 1, Event Validation is there to make sure that only the values 1-5 are submitted for ddlList.  It does this by taking the value that was sent (Request.Form["ddlList"]) and re-generating the numeric hash.  The hash is then compared to the list, and if it exists, the value is allowed.  If it doesn't exist in the de-serialized Event Validation list, then an exception is thrown and the page cannot continue processing.

__EVENTVALIDATION – Manipulation

De-serializing the Event Validation value is pretty easy.  It is very similar to how it is done for ViewState.  After writing my own tool to tamper with Event Validation, I found the ViewState Viewer (http://labs.neohapsis.com/2009/08/03/viewstateviewer-a-gui-tool-for-deserializingreserializing-viewstate/) plug-in for Fiddler.  The ViewState Viewer plugs right into Fiddler with no issues.  Don't let the name mislead you; this tool works great with the Event Validation data as well.  When you paste in your Event Validation string and click the "Decode" button, it generates a nice XML snippet of the contents.  The screen shot below shows the Event Validation value and its decoded value from a test page I created.

Once you have the information decoded, you can add your own integers to the System.Collections.ArrayList.  Look closely and you might see that the last integer, -439972587, is not aligned with the rest of the items.  That is because it is a custom value that I added to the Event Validation list.  These numbers don't look like they really mean anything, but to Event Validation, they mean everything.  If we can determine how to create our own numbers, we can manipulate what the server will see as valid data.  Once you have made your modifications, click the "Encode" button and the value in the top box will refresh with the new __EVENTVALIDATION value.  It may be possible to brute force the data you want (if you can't create the exact hash code) by just padding a bunch of integers into the list and submitting your modified data.  This is definitely hit or miss, could be time-consuming, and would probably generate a lot of errors for someone to notice.  We are monitoring our error logs, right?

__EVENTVALIDATION – Thoughts

Maybe it is just me, but I always thought that if I had Event Validation enabled, as a developer, I didn't have to validate the data submitted from drop down lists.  I thought that this type of data was protected because Event Validation enforced these constraints.  This is obviously not the case, and it honestly brings up a very important topic for another day: "We shouldn't be relying solely on configuration for protection."  Although this post uses drop down lists as the example, this has a much greater effect.  What about buttons that are made visible based on your role?  If Event Validation can be tampered with, the events for buttons that didn't even exist on the page can be added to the acceptable list.  If you are not checking that the user has the proper role within that event, you may be in big trouble. 

So what types of attacks are possible?

  • Parameter tampering
  • Authorization bypass
  • Cross Site Scripting
  • Maybe More…

ViewStateMAC – Finally!!

OK, it is not all bad news, and we finally come to our friend (or worst enemy, if it is disabled) ViewStateMAC.  Keep in mind that ViewStateMAC is enabled by default, so if it is disabled, it was done explicitly.  ViewStateMAC adds a message authentication code to ViewState, obviously, judging by its name.  Basically, it adds a keyed hash to the ViewState so that an attacker cannot tamper with its data.  So going back to the drop down list: if a developer uses code like Example 2 to access the selected item, then you have to be able to tamper with the ViewState, otherwise the first item in the original list will get selected.  But if the code uses something like Example 3 to access that data, you could add your value to the Event Validation list and get it accepted by the application.  Or can you?

Example 2
protected void cmdSubmit_Click(object sender, EventArgs e)
{
  Response.Write(ddlList.SelectedItem.Value);
}

Example 3
protected void cmdSubmit_Click(object sender, EventArgs e)
{
  Response.Write(Request.Form["ddlList"].ToString());
}

Actually, ViewStateMAC does more than sign ViewState; it also signs the Event Validation value.  I was not able to identify any documentation on MSDN that indicates this, but apparently __PREVIOUSPAGE may get signed with it as well.  I have run extensive tests and can confirm that ViewStateMAC is critical to signing Event Validation.  So in general, if ViewStateMAC is enabled, it protects both ViewState and Event Validation from being tampered with.  That is pretty important, and I am not sure why it is not listed on MSDN.  Unfortunately, disable it and it creates a much greater security risk than initially thought, because it affects more than ViewState. 

ViewStateMAC – Not a Replacement for Input Validation

In no way should a developer rely solely on ViewStateMAC, Event Validation, and ViewState as their means of input validation.  The application should be developed as if these features did not even exist.  Drop down lists should be validated to ensure that only valid values were submitted.  Control events should be verified to be allowed.  Why not use these features?  I am not saying that you should not use them, but they should be in addition to your own input validation.  What if ViewStateMAC were to get disabled during testing, or for some other unknown reason?  Even if Event Validation is still enabled, it is not looking good.  Unless you have the ViewState encrypted, which would help block tampering with it, an attacker could still manipulate the Event Validation value. 
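
As a sketch of that kind of defense for the drop down list from Example 1, the event handler can re-check the submitted value against the known list of valid values instead of trusting Event Validation alone:

protected void cmdSubmit_Click(object sender, EventArgs e)
{
    // The values we actually rendered into the drop down list
    string[] validValues = { "1", "2", "3", "4", "5" };
    string submitted = Request.Form["ddlList"];

    if (submitted == null || Array.IndexOf(validValues, submitted) < 0)
    {
        // Not one of our values; reject the request
        return;
    }

    Response.Write(HttpUtility.HtmlEncode(submitted));
}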

Conclusion

The details provided here have been high level, without much detail on actually manipulating the Event Validation field; the post would be too long if it included all of that.  Hopefully there is enough information to make developers aware of the importance of ViewStateMAC and how Event Validation actually works.  From a penetration tester's view, if you come across a .Net web forms application with ViewStateMAC disabled, it should be researched further to accurately identify the risk to the application.  Devs, please, please, please do not disable this feature.  If you are on a web farm, set the same machine key on each machine and ViewStateMAC will still be supported.  Remember, these features are helpers, not complete solutions.  You are responsible for performing proper input validation to ensure that the data you expect is what you accept.

The information provided is for informational and educational purposes only.  The information is provided as-is with no claim to be error free.  Use this information at your own risk.
