SQL Injection: Calling Stored Procedures Dynamically

October 26, 2016
Filed under: Development, Security, Testing 

It is not news that SQL Injection is possible within a stored procedure. There have been plenty of articles discussing this issue. However, there is a unique way that some developers execute their stored procedures that makes them vulnerable to SQL Injection, even when the stored procedure itself is actually safe.

Look at the example below. The code uses a stored procedure, but it calls the stored procedure with a dynamically concatenated statement.

    conn.Open();
    // User input is concatenated directly into the EXEC statement.
    var cmdText = "exec spGetData '" + txtSearch.Text + "'";
    SqlDataAdapter adapter = new SqlDataAdapter(cmdText, conn);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

It doesn't really matter what is in the stored procedure for this particular example, because the stored procedure is not where the injection occurs. Instead, the injection occurs when the EXEC statement is concatenated together. The search parameter (an email address in this application) is added in dynamically, which we know is bad.

This can be quickly tested by just inserting a single quote (‘) into the search field and viewing the error message returned. It would look something like this:

System.Data.SqlClient.SqlException (0x80131904): Unclosed quotation mark after the character string '''. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at

With a little more probing, it is possible to get more information that helps us understand how this SQL is constructed. For example, by placing ',' into the search field, we see a different error message:

System.Data.SqlClient.SqlException (0x80131904): Procedure or function spGetData has too many arguments specified. at System.Data.SqlClient.SqlConnection.

The mention of too many arguments helps identify this technique for calling stored procedures: the injected quote closed the first string and the comma added a second value, so the command became exec spGetData '','', which passes two parameters to a procedure that expects only one.

With SQL Server we have the ability to execute more than one statement in a single batch. In this case, we just need to break out of the current exec statement and add our own statement. Remember, this doesn't affect the execution of the spGetData stored procedure. We are looking at the ability to add new statements to the request.

Let's assume we search for this:

james@test.com';SELECT * FROM tblUsers--

This would change our cmdText to look like this:

exec spGetData 'james@test.com';SELECT * FROM tblUsers--'

The above query will execute the spGetData stored procedure and then execute the following SELECT statement, ultimately returning two result sets. In many cases this is not that useful for an attacker, because the second result set would not be returned to the user. However, that doesn't make an attack impossible. Instead, it shifts the attack more toward what we can do, rather than what we can retrieve.

At this point, we are able to execute any commands against the SQL Server that the user has permission to. This could mean executing other stored procedures, dropping or modifying tables, adding records to a table, or even more advanced attacks such as manipulating the underlying operating system. An example might be to do something like this:

james@test.com';DROP TABLE tblUsers--

If the user has permissions, the server would drop tblUsers, causing a lot of problems.

When calling stored procedures, use command parameters rather than dynamic SQL. The following is an example of using proper parameters:

    conn.Open();
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "spGetData";
    // Treat the command text as a procedure name, not a SQL batch.
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Connection = conn;
    // The user input is passed as a parameter value, not concatenated into SQL.
    cmd.Parameters.AddWithValue("@someData", txtSearch.Text);
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

The code above adds parameters to the command object, removing the ability to inject into the dynamic code.
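
If you prefer not to rely on AddWithValue's type inference, an explicitly typed parameter accomplishes the same thing. This is just a sketch; the parameter type and length here are assumptions for illustration:

    // Sketch only: same call with an explicitly typed parameter (assumed type/length).
    SqlParameter searchParam = new SqlParameter("@someData", SqlDbType.VarChar, 256)
    {
        Value = txtSearch.Text
    };
    cmd.Parameters.Add(searchParam);

Either way, the input is sent as data and never becomes part of the SQL text.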

It is easy to think that because we are using a stored procedure, and the stored procedure itself may be safe, we are secure. Unfortunately, simple mistakes like this can lead to a vulnerability. Make sure you are making database calls using parameterized queries. Don't use dynamic SQL, even if it is only to call a stored procedure.

Potentially Dangerous Request.Path Value was Detected…

November 4, 2015
Filed under: Development, Security 

I have discussed request validation many times, usually in the context of the potentially dangerous input error message that appears when submitting certain data to a web page. Another interesting protection in ASP.Net is the built-in, on-by-default Request.Path validation. Have you ever seen the error below when using or testing your application?

[Screenshot: the "A potentially dangerous Request.Path value was detected from the client" error page]

The screen above occurred because I placed the asterisk (*) character in the URL. In ASP.Net, there is a default set of illegal characters for the URL path. This list is controlled by the requestPathInvalidCharacters setting on httpRuntime and can be configured in the web.config file. By default, the following characters are blocked:

  • <
  • >
  • *
  • %
  • &
  • :
  • \\

It is important to note that these characters are blocked from appearing in the URL path; this does not include the protocol specification or the query string. That should be obvious, since the query string uses the & character to separate parameters.
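
If you want to double-check what your application is actually configured with, the list is exposed at runtime through the configuration API. A minimal sketch (assuming the standard system.web/httpRuntime section):

    // Requires a reference to System.Web.Configuration.
    var httpRuntimeSection = (HttpRuntimeSection)WebConfigurationManager.GetSection("system.web/httpRuntime");
    string invalidChars = httpRuntimeSection.RequestPathInvalidCharacters;   // e.g. "<,>,*,%,&,:,\\"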

There are not many cases where your URL needs to use any of these default characters, but if there is a need to allow a specific character, you can override the default list. The override is done in the web.config file. The snippet below shows setting the new values (removing the < and > characters):

    <httpRuntime requestPathInvalidCharacters="*,%,&,:,\\"/>

Be aware that because the web.config file is an XML file, you need to escape the < and > characters, setting them as &lt; and &gt; respectively.

Remember that modifying the default security settings can expose your application to a greater security risk. Make sure you understand the risk of making these modifications before you perform them. It should be a rare occurrence to require a change to this default list. Understanding the platform is critical to understanding what is and is not being protected by default.

ASP.Net Insufficient Session Timeout

October 6, 2015
Filed under: Development, Security, Testing 

A common security concern found in ASP.Net applications is Insufficient Session Timeout. In this article, the focus is not on the ASP.Net session that is not effectively terminated, but rather the forms authentication cookie that is still valid after logout.

How to Test

  • User is currently logged into the application.
  • User captures the .ASPXAUTH cookie (the name may be different in different applications).
    • Cookie can be captured using a browser plugin or a proxy used for request interception.
  • User saves the captured cookie for later use.
  • User logs out of the application.
  • User requests a page on the application, passing the previously captured authentication cookie.
  • The page is processed and access is granted.

Typical Logout Options

  • The application calls FormsAuthentication.SignOut() (a minimal sketch of a typical handler follows this list).
  • The application sets the Cookie.Expires property to a previous DateTime.
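
Here is a rough sketch of what such a logout handler commonly looks like; the control name and redirect target are illustrative, not from a specific application:

    protected void btnLogout_Click(object sender, EventArgs e)
    {
        // Asks the browser to remove the forms authentication cookie.
        FormsAuthentication.SignOut();

        // Explicitly back-date the cookie as well (the second option above).
        HttpCookie expired = new HttpCookie(FormsAuthentication.FormsCookieName)
        {
            Expires = DateTime.Now.AddDays(-1),
            HttpOnly = true
        };
        Response.Cookies.Set(expired);

        Response.Redirect("~/Login.aspx");
    }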

Cookie Still Works!!

Following the user process above, the cookie still provides access to the application as if the logout never occurred. So what is the deal? The key is that unlike a true "session," which is maintained on the server, the forms authentication cookie is self-contained. It does not have a server-side component to stay in sync with. Among other things, the authentication cookie contains your username or ID, possibly roles, and an expiration date. When the cookie is received by the server, it is decrypted (please tell me you are using protection="All") and the data extracted. If the cookie's internal expiration date has not passed, the cookie is accepted and processed as a valid cookie.
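
Conceptually, the server-side handling looks something like the sketch below. The framework does this for you; the cookieValue variable is just a stand-in for the raw cookie value:

    // Decrypt the self-contained ticket carried in the cookie value.
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookieValue);

    string user = ticket.Name;         // username or ID baked into the cookie
    string data = ticket.UserData;     // often roles or other custom data
    bool expired = ticket.Expired;     // based only on the date inside the ticket

    if (!expired)
    {
        // The ticket is accepted; no server-side state is consulted, so a captured
        // copy keeps working until this internal expiration passes.
    }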

So what did FormsAuthentication.SignOut() do?

If you look under the hood of the .Net framework (it has been a few years, but I doubt much has changed), you will see that FormsAuthentication.SignOut() really just removes the cookie from the browser. There is no server-side action; it merely asks the browser to remove the cookie by clearing the value and back-dating the Expires property. While this does work to remove the cookie from the browser, it doesn't have any effect on a copy of the original cookie you may have captured. The only sure way to make the cookie inactive before its internal timeout occurs would be to change your machine key in the web.config file, which is not a reasonable solution.

Possible Mitigations

You should be protecting your cookie by setting the HttpOnly and Secure properties. HttpOnly tells the browser not to allow JavaScript access to the cookie value, an important step in protecting the cookie from theft via cross-site scripting. The Secure flag tells the browser to only send the authentication cookie over HTTPS, making it much more difficult for an attacker to intercept the cookie as it is sent to the server.

Set a short timeout (15 minutes) on the cookie to decrease the window an attacker has to obtain the cookie.
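
A rough sketch of issuing the authentication cookie with these protections applied is below. It assumes a username variable holding the authenticated user's name; most applications would instead set the equivalent options (requireSSL, timeout, protection) on the forms element in web.config:

    // Sketch: short-lived, encrypted and MAC'd ticket with a hardened cookie.
    FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
        1, username, DateTime.Now, DateTime.Now.AddMinutes(15),   // 15 minute lifetime
        false, string.Empty);

    HttpCookie authCookie = new HttpCookie(
        FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket))
    {
        HttpOnly = true,   // not readable from JavaScript
        Secure = true      // only sent over HTTPS
    };
    Response.Cookies.Add(authCookie);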

You could attempt to build a tracking system to manage the authentication cookie on the server to disable it before its time has expired. Maybe something for another post.

Understand how the application is used to determine how risky this issue may be. If the application is not used on shared/public systems and the cookie is protected as mentioned above, the attack surface is significantly decreased.

Final Thoughts

If you are facing this type of finding and it is a forms authentication cookie issue, not the Asp.Net session cookie, take the time to understand the risk. Make sure you understand the settings you have and the priority and sensitivity of the application to properly understand “your” risk level. Don’t rely on third party risk ratings to determine how serious the flaw is. In many situations, this may be a low priority, however in the right app, this could be a high priority.

F5 BigIP Decode with Fiddler

September 18, 2015
Filed under: Development, Testing 

There are many tools out there that allow you to decode the F5 BigIP cookie used on some sites, but I haven't seen anything that just plugs into Fiddler, if that is what you use for debugging. One of the reasons you may want to decode the F5 cookie is just that: debugging. If you need to know which server behind the load balancer your request is going to in order to troubleshoot a bug, this is the cookie you need. I won't go into a long discussion of the F5 cookie here; I have a description of it in an earlier post.

Most of the examples I have seen use Python to do the conversion. I looked for a JavaScript example, since that is what Fiddler supports in its Custom Rules (JScript.NET), but couldn't really find anything. I got to messing around with it and put together a very rough set of functions to decode the cookie value back to its IP address and port. It sticks the result into a custom column in the Fiddler interface (scroll all the way to the right, the last column). If it identifies the cookie, it will attempt to decode it and populate the column. This can be done for both request and response cookies.
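
To show the math the functions below perform, here is a small stand-alone C# sketch using a commonly cited example value (1677787402.36895.0000, which decodes to 10.1.1.100:8080). The IP is stored as a decimal number with its bytes in reverse order, and the two port bytes are swapped; the class and variable names are just for illustration:

    using System;

    class BigIpCookieDecode
    {
        static void Main()
        {
            string value = "1677787402.36895.0000";   // decodes to 10.1.1.100:8080
            string[] parts = value.Split('.');

            uint encodedIp = uint.Parse(parts[0]);
            ushort encodedPort = ushort.Parse(parts[1]);

            // The IP bytes are stored in reverse order.
            byte[] b = BitConverter.GetBytes(encodedIp);   // {10, 1, 1, 100} on little-endian hosts
            string ip = string.Join(".", b[0], b[1], b[2], b[3]);

            // The two port bytes are swapped.
            int port = ((encodedPort & 0xFF) << 8) | (encodedPort >> 8);

            Console.WriteLine(ip + ":" + port);            // 10.1.1.100:8080
        }
    }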

To decode response cookies, you need to update the custom Fiddler rules by adding code to the static function OnBeforeResponse(oSession: Session) method. The following code can be inserted at the end of the function:

        var re = /\d+\.\d+\.0{4}/;    // Simple regex to identify the BigIP pattern.
        
        if (oSession.oResponse.headers)
        {
            for (var x:int = 0; x < oSession.oResponse.headers.Count(); x++)
            {
                if(oSession.oResponse.headers[x].Name.Contains("Set-Cookie")){
                    var cookie : Fiddler.HTTPHeaderItem = oSession.oResponse.headers[x];
                    var myArray = re.exec(cookie.Value);
                    if (myArray != null && myArray.length > 0)
                    {
                    
                        for (var i = 0; i < myArray.length; i++)
                        {
                            var index = myArray[i].indexOf(".");
                        
                            var val = myArray[i].substring(0,index);
                        
                            var hIP = parseInt(val).toString(16);
                            if (hIP.length < 8)
                            {
                                var pads = "0";
                                hIP = pads + hIP;
                            }
                            var hIP1 = parseInt(hIP.toString().substring(6,8),16);
                            var hIP2 = parseInt(hIP.toString().substring(4,6),16);
                            var hIP3 = parseInt(hIP.toString().substring(2,4),16);
                            var hIP4 = parseInt(hIP.toString().substring(0,2),16);
                            
                            var val2 = myArray[i].substring(index+1);
                            var index2 = val2.indexOf(".");
                            val2 = val2.substring(0,index2);
                            var hPort = parseInt(val2).toString(16);
                            if (hPort.length < 4)
                            {
                                var padh = "0";
                                hPort = padh + hPort;
                            }
                            var hPortS = hPort.toString().substring(2,4) + hPort.toString().substring(0,2);
                            var hPort1 = parseInt(hPortS,16);
        
                            oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + "  ";
                       
                        }
                    }
                }
            }
        }

In order to decode the cookie from a request, you need to add the following code to the static function OnBeforeRequest(oSession: Session) method.

        var re = /\d+\.\d+\.0{4}/;      // Simple regex to identify the BigIP pattern.
        oSession["ui-customcolumn"] = "";
        
           
           
        if (oSession.oRequest.headers.Exists("Cookie"))
        {
            var cookie = oSession.oRequest["Cookie"];
            var myArray = re.exec(cookie);
            if (myArray != null && myArray.length > 0)
            {
            
                for (var i = 0; i < myArray.length; i++)
                {
                    var index = myArray[i].indexOf(".");
                
                    var val = myArray[i].substring(0,index);
                
                    var hIP = parseInt(val).toString(16);
                    if (hIP.length < 8)
                    {
                        var pads = "0";
                        hIP = pads + hIP;
                    }
                    var hIP1 = parseInt(hIP.toString().substring(6,8),16);
                    var hIP2 = parseInt(hIP.toString().substring(4,6),16);
                    var hIP3 = parseInt(hIP.toString().substring(2,4),16);
                    var hIP4 = parseInt(hIP.toString().substring(0,2),16);
                    
                    var val2 = myArray[i].substring(index+1);
                    var index2 = val2.indexOf(".");
                    val2 = val2.substring(0,index2);
                    var hPort = parseInt(val2).toString(16);
                    if (hPort.length < 4)
                    {
                        var padh = "0";
                        hPort = padh + hPort;
                    }
                    var hPortS = hPort.toString().substring(2,4) + hPort.toString().substring(0,2);
                    var hPort1 = parseInt(hPortS,16);

                    oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + "  ";
               
                }
            }
        }

Again, this is a rough compilation of code to perform the tasks. I am well aware there are other ways to do this, but this did seem to work. USE AT YOUR OWN RISK. It is your responsibility to make sure any code you add or use is suitable for your needs. I am not liable for any issues from this code. From my testing, this worked to decode the cookie and didn't present any issues. This is not production code, but an example of how this task could be done.

Just add the code to the custom rules file and visit a site with an F5 cookie, and it should decode the value.

A Pen Test is Coming!!

October 18, 2014
Filed under: Development, Security, Testing 

You have been working hard to create the greatest app in the world.  Ok, so maybe it is just a simple business application, but it is still important to you.  You have put countless hours of hard work into creating this masterpiece.  It looks awesome, and does everything that the business has asked for.  Then you get the email from security: your application will undergo a penetration test in two weeks.  Your heart skips a beat and sinks a little as you recall everything you have heard about this experience.  Most likely, your immediate reaction is to go on the defensive.  Why would your application need a penetration test?  Of course it is secure, we do use HTTPS.  No one would attack us, we are small.  Take a breath..  it is going to be alright.

All too often, when I go into a penetration test, the developers start on the defensive.  They don't really understand why these “other” people have to come in and test their application.  I understand the concerns.  History has shown that many of these engagements are truly adversarial.  The testers jump for joy when they find a security flaw.  They tell you how bad the application is and how simple the fix is, leaving you feeling about the size of an ant.  This is often due to a lack of good communication skills.

Penetration testing is adversarial.  It is an offensive assessment to find security weaknesses in your systems, an attempt to simulate an attacker against your system.  Of course there are many differences, such as scope, timing and rules, but the goal is the same: let's see what we can do to your system.  Unfortunately, I find that many testers don't have the communication skills to relay the information back to the business and developers in a positive way.  I can't tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck, and then leave.  If that is your penetration tester, find a new one.  First, that attitude breaks down the communication with the client and doesn't help promote a secure atmosphere.  We don't get anywhere by belittling the teams that have worked hard to create their application.  Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified.  Just listing a bunch of flaws is fairly useless to a company.

These engagements should be worth everyone’s time.  There should be positive communication between the developers and the testing team.  Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get.  The engagement should be helpful.  With the right company, you will get a solid assessment and recommendations that you can do something with.  If you don’t get that, time to start looking at another company for testing.  Make sure you are ready for the test.   If the engagement requires an environment to test in, have it all set up.  That includes test data (if needed).   The testers want to hit the ground running.  If credentials are needed, make sure those are available too.  The more help you can be, the more you will benefit from the experience.   

As much as you don't want to hear it, there is a very high chance the test will find vulnerabilities.  While it would be great if applications didn't have vulnerabilities, it is fairly rare to test one and not find any.  Use this experience to learn and train on security issues.  Take the feedback as constructive criticism, not as someone attacking you.  Trust me, you want the pen testers to find these flaws before a real attacker does.

Remember that this is for your benefit.  We as developers also need to stay positive.  The last thing you want to do is challenge the pen testers, saying your app is not vulnerable.  The teams that do that are usually the most vulnerable.  Stay positive and it will be a great learning experience.

Bounties For Fixes

October 11, 2013
Filed under: Security 

It was just recently announced that Google will pay for open-source code security fixes (http://www.computerworld.com/s/article/9243110/Google_to_pay_for_open_source_code_security_fixes). Paying for stuff to happen is nothing new, we have seen Bug Bounty programs popping up in a lot of companies. The idea behind the bug bounty is that people can submit bugs they have found and then possibly get paid for that bug. This has been very successful for some large companies and some bug finders out there.

The difference in this new announcement is that they are paying for people to apply fixes to some open source tools that are widely used. I personally think this is a good thing because it will encourage people to actually start fixing some of the issues that exist. Security is usually bent on finding vulnerabilities, which doesn’t really help fix security at all. It still requires the software developers to implement some sort of change to get that security hole plugged. Here, we see that the push to fix the problem is now being rewarded. This is especially true in open-source projects as many of the people that work on these projects do so voluntarily.

Is there any concern, though, that this process could be abused? The first thought that comes to mind is people working together, where one person plants the bug and the other one fixes it. Not sure how realistic that is, but I am sure there are people thinking about it. What could possibly be more challenging is verifying the fixes. What happens if someone patches something, but they do it incorrectly? Who is testing the fix? How do they verify that it is really fixed properly? If they find later that the fix wasn't complete, does the fixer have to return the payment? There are always questions to be answered when we look at a new program like this. I am sure that Google has thought about this before rolling it out, and I really hope the program works out well. It is a great idea, and we need to get more people involved in helping fix some of these issues.

Your Passwords Were Stolen: What’s Your Plan?

May 29, 2013
Filed under: Development, Security 

If you have been glancing at many news stories this year, you have certainly seen the large number of data breaches that have occurred. Even just today, we are seeing reports that Drupal.org suffered from a breach (https://drupal.org/news/130529SecurityUpdate) that shows unauthorized access to hashed passwords, usernames, and email addresses. Note that this is not a vulnerability in the CMS product, but the actual website for Drupal.org. Unfortunately, Drupal is just the latest to report this issue.

In addition to Drupal, LivingSocial also suffered a huge breach involving passwords. LinkedIn, Evernote, Yahoo, and Name.com have also joined this elite club. In each of these cases, we have seen many different formats for storing the passwords. Some sites use plain text (ouch); others are actually doing what has been recommended and using a salted hash. Even with a salted hash, there are still issues: hashes are fast, and some hash algorithms are weaker than others. Bad choices can lead to an immediate failure in implementation and protection.

What format you should store your passwords in will be saved for another post, and it has been discussed heavily on the internet. It is really outside the scope of this post because, in this discussion, it is already too late for that. Here, I want to ask the simple question: "You have been breached. What do you do?"

Ok, maybe it is not a simple question, or maybe it is. Most of the sites that have seen these breaches are fairly quick to force password resets for all of their users. The idea behind this is that the credentials were stolen, but only the actual user should be able to perform a password reset. The user performs the reset, they have new credentials, and the information that the bad guy got (and everyone else that downloads the stolen credentials) is no good. Or maybe not?? Wait.. you re-use passwords across multiple sites? Well, that makes it more interesting. I guess you now need to reset a bunch of passwords.

Resetting passwords appears to be the standard. I haven't seen anyone attempt anything else; if you have, please share. But what else would work? You can't just change the algorithm to make it stronger.. the bad guy has the password. Changing the algorithm doesn't change that fact; they just log in through a stronger algorithm. I guess that won't work. It might be nice to have a mechanism to upgrade everyone to a stronger algorithm as time goes on, though.

So if resetting passwords en masse appears to work and is the standard, do you have a way to do it? If you got breached today, what would you need to do to reset everyone's password, or at least force a password reset on all users? There are a few options, and of course it depends on how you actually manage user passwords.

If you have a password expiration field in the DB, you could just set all passwords to have expired yesterday. Now everyone will be presented with an expired password prompt. The problem with this solution arises if resetting an expired password only requires the old password to set a new one: it is possible the bad guy does this before the actual user. Oops.

You could just null out, or put a 0 or some other false value into, all of the password fields. This only works for encrypted or hashed passwords, not clear text. It could be done with a simple SQL UPDATE statement; just drop that needless WHERE clause ;). When a user goes to log in, they will be unsuccessful, because when the password they submit is encrypted or hashed it will never match the value you updated the field to. This forces them to use the forgot password feature.
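
As a rough sketch of that idea (the connection string, table, and column names are made up for illustration, and the passwords are assumed to be hashed):

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("UPDATE tblUsers SET PasswordHash = '0'", conn))
    {
        // No WHERE clause on purpose: every account gets an impossible hash value,
        // so no submitted password can ever match and everyone is forced through
        // the forgot password process to set new credentials.
        conn.Open();
        int affected = cmd.ExecuteNonQuery();
    }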

You could run a separate application that resets everyone's password like the previous method; it just doesn't run a DB update directly on the server. Maybe you are a control freak about what gets run against the servers and want to control that access.

As you can see, there are many ways to do this, but have you really given it any thought? Have you written out a plan so that, in the event your site gets breached like these other sites, you will be able to react intelligently and swiftly? Often when we think of incident response, we think of stopping the attack, but we also have to think about how we would quickly protect our users from future attacks.

These ideas are just a few examples of how you can go about this to help provoke you and your team to think about how you would do this in your situation. Every application is different and this scenario should be in your IR plan. If you have other ways to handle this, please share with everyone.

ViewState: Still Mis-understood

April 22, 2013
Filed under: Development, Security 

Here we are in 2013 and we are still having discussions about what ViewState is and how it works.  For you MVC guys and gals, you are probably even wondering who is still using it.  Although I do find it interesting that we have ViewState in Webforms but not in MVC even though MVC has the Views.   Does that make anyone else wonder? 

A few weeks ago, I was in the midst of a Twitter discussion where some different ideas about ViewState were floating around.  It really started when the idea came across to use encryption for your ViewState to prevent tampering.  While I agree 100% that encrypting your ViewState will prevent someone from tampering with it, I feel that it adds quite a bit of overhead when it shouldn't be needed.  A general rule of thumb that I use with ViewState is that you shouldn't be storing anything sensitive in it.  There are other mechanisms for storing sensitive information, and the client is your last, last, last choice (or shouldn't be a choice at all).

My response: why not just use ViewStateMac?  The whole point of ViewStateMac is to protect the ViewState from tampering.  It is on by default, and doesn't require encrypting all of your ViewState and then decrypting it.

For those of you who don’t know, ViewState is a .Net specific client-side storage mechanism.  By default, it is used to store the state of your view.  I know, webforms don’t have views per se..  But that is what it does.  It stores the values for your label controls, the items in your drop down lists, data in your datagrid, etc.  This data is then used on a postback to re-populate the server-side controls.  For example, if there is a label on the form (say copyright), when the form is submitted and re-displayed the text for that label is populated automatically from the ViewState.   This saves the developer from having to reset the value on every response. 

Another example is with a drop down list.  All of the values are stored in the ViewState.   When the user submits the form, the drop down list is re-populated from the ViewState.  This does a few things.  First, it keeps the developer from having to populate the list from the original data source for every response and request.  Of course, there are other ways to save the hit to the original data source and store it in session or some other quick storage.  Second, it is basically a version of a reference map.  When done correctly, it is not possible for an attacker to submit a value that didn’t exist in the drop down list.  This really helps when it comes to parameter tampering on these fields.   Of course there are ways to program this where it doesn’t protect you but that is beyond this post. 

So far, everything we have looked at in ViewState is visible anyway.  Why encrypt it?

I have seen some really bad things stored in ViewState.   I have seen RACF usernames and passwords, full connection strings, and lots of other stuff.  To be blunt, those should never be in there.

So again, if we are not storing this sensitive information in ViewState, why encrypt it?   If the answer is just tamper protection, my opinion is to just go with the ViewStateMac.

When I do an assessment on a web application, one of my favorite things to do is look at the ViewState.  Of course, I love to see when MAC is not enabled.  Most of the time, ViewStateMac is enabled and I don't get much from the ViewState.  I get confirmation that the developers aren't storing sensitive information in there and that they are properly protecting it.  When I see that the ViewState is encrypted, it sets off some bells in my head.  Why are they encrypting this ViewState?  Maybe they do have something to hide.  Maybe there is a reason for me to dig into it and see if I can't break the encryption to see what it is.

One additional point I would like to make is that ViewState is not meant to be storage across pages.  It is meant to be storage for a single web form. 

I was called out on the Twitter chat as loving ViewState.  I wouldn’t go that far, but I don’t have much ill will toward it.  It has a purpose and can be quite useful if used properly.  Of course, it can be a great help to an attacker if not used properly. 

I enjoyed the discussion on Twitter, unfortunately sometimes 140 characters is not really enough to get ideas across.  I would be curious if anyone has any other thoughts on this topic on how ViewState should work and pros and cons for encrypting ViewState over just setting ViewStateMac.  Feel free to reach out to me.

Hidden Treasures: Not So Hidden

April 5, 2013
Filed under: Development, Security, Testing 

For years now, I have run into developers who believe that just because a request can't be seen, it is not vulnerable to flaws.  Wait, what are we talking about here?  What do you mean by a request that can't be seen?  There are a few different ways that a user would not see a request.  For example, Ajax requests are invisible to users.  What about framed pages?  Except for the main parent, all of the other pages are hidden from the user's view.  These are what I would call hidden requests.

Let's use a framed site as an example.  Let's say that a page has multiple frames embedded in it, each leading to a different page of content.  From a user perspective, I would normally only see the URL in the address bar of the parent page that contains the other frames.  Although each frame has its own URL, I don't see it natively.  This type of out-of-sight, out-of-mind thought process often leads us to overlook the security of these pages.

There are all sorts of vulnerabilities that can be present in the scenario above.  I have seen SQL injection, Direct Object Reference and Authorization issues.  BUT THEY ARE HIDDEN??

To the naked eye, yes, they are hidden.  Unfortunately, there are numerous tools available to see these requests.  Web Proxies are the most popular, although there are some browser add-ins that also may do the trick.  The two proxies I used the most are Burp and Fiddler.  Both of these tools are used to intercept and record all of the HTTP/HTTPS traffic between your browser and the server.  This means that even though I might not see the URL or the request in the browser, I will see it in the proxy. 

Many years ago, I came across this exact scenario.  We had a website that used frames and one of the framed pages used parameters on the querystring.  This parameter just happened to be vulnerable to SQL Injection.  The developer didn’t think anything of this because you never saw the parameters.  All it took was just breaking the page out of the frame and directly modifying the value in the address bar, or using a proxy to capture and manipulate the request to successfully exploit the flaw. 

It was easy to fix the problem, as the developer updated his SQL to use parameterized queries instead of dynamic SQL, but this just shows how easy it is to overlook issues that can’t be seen.

In another instance, I saw a page that allowed downloading files based on a querystring value.  During normal use of the application you might never see that request, just the file downloading.  Using a proxy, it was easy to not only identify the request, but identify that it was a great candidate for finding a security flaw.  When we reported the flaw, I was asked multiple times about how I found the flaw.  It was another case where the developer thought that because the request wasn’t displayed anywhere, no one would find it.

There are many requests that go unseen in the address bar of the web browser.  We must remember that as developers, it is our responsibility to properly secure all of our code, whether or not they appear in the address bar.  I encourage all developers to use web proxies while testing and developing their applications so they can be aware of all the transactions that are actually made. 

ASP.Net and CSRF

January 7, 2013
Filed under: Development, Security 

Cross-site request forgery (CSRF) is a very common vulnerability today.  Like most frameworks, ASP.Net is not immune by default.  There are some built-in features that can be enabled to help reduce the surface area of this attack; however, we need to be aware of how they work and which situations they may not work in.  First, let's provide a quick review of what CSRF is.

CSRF Overview

CSRF is really a result of the browser's willingness to submit cookies to the server they are associated with.  Of course, this has to happen so that your sessions stay active and you don't have to enter your credentials on every request.  The problem arises when a request to one site (say site A) is made by a second site (say site B) without your permission or interaction.  Let's take a look at a quick example of what we are talking about.

Say we have a site that allows deleting users by the administrator.   The administrator is provided with a table listing all the users and a link to delete each one.   That link looks something like this:

http://www.jardinesoftware.com/delete.aspx?userid=6

When an administrator clicks this link, the application first checks to see if he is authenticated (using the session or authentication cookie) and authorized.  It then calls the functions responsible to delete the user with the id of 6.
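
A sketch of what that vulnerable page might look like is below.  The authorization check and the UserService helper are illustrative only; the point is that nothing ties the request to an unguessable, user-specific value:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Authentication and authorization happen via the cookies the browser sends.
        if (!User.Identity.IsAuthenticated || !User.IsInRole("Admin"))
        {
            Response.Redirect("~/Login.aspx");
            return;
        }

        // Nothing here is unique to this user's session, so any site that can get
        // the admin's browser to make this request can trigger the delete.
        int userId = int.Parse(Request.QueryString["userid"]);
        UserService.Delete(userId);   // hypothetical helper
    }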

The issue here is that there is nothing unique in this request.  If another user (without administrative rights) learns of this URL, then it may be possible to get an administrator to run it for him.  Here is how that works:

  1. The attacker crafts the request he wants to be executed by the administrator.  In this case, it could be http://www.jardinesoftware.com/delete.aspx?userid=8
  2. The attacker sends the administrator an email containing this link (probably obfuscated so they don't realize it calls the delete function on the website).  The attacker could also send a link to a seemingly innocent site that contains an image tag with a src attribute pointing to the delete link above.
  3. The victim (administrator) MUST already be logged into the application when he clicks the link. 
  4. The browser sees the request for JardineSoftware.com and kindly appends the cookies for that domain.
  5. The server receives the cookies (for authentication) and the request and processes it.  The server has no way of knowing that the user didn’t actually initiate the request on purpose.
  6. The user is deleted and the admin is none the wiser.

To resolve this, we need something to make the request unique.  When the request is unique, it is difficult for the attacker to know the unique value for the victim, so the request will fail.  (In the presence of XSS, this can many times be bypassed.)  There are many different ways to do this, which we will cover next.  Keep in mind that each solution described below has its pros and cons that we must be aware of.

ViewState

.Net web forms have a feature called ViewState, which allows storing state information on the client.  You typically don't see it, because it is a hidden field (unless you view source).  ViewState is not guaranteed to be unique across users.  At times there may be unique values in there that we don't think about (the user name), but for the most part, ViewState is not sufficient to protect against CSRF.  Enter ViewStateUserKey.  ViewStateUserKey provides a unique value within the ViewState per user session.  It is not enabled by default, and must be set by the developer.  This property can be set in the Page_Init event on each page or in the master page.  The following is an example of how this can be set:

protected void Page_Init(object sender, EventArgs e)
{
    Page.ViewStateUserKey = Session.SessionID;
}

Microsoft has made some changes in Visual Studio 2012.  If you create a new Web Forms application, it will include some additional CSRF changes to help mitigate the issue out of the box.  The following shows an example of the new Master Page Init method (non-relevant code has been removed):

protected void Page_Init(object sender, EventArgs e)
{
    // The code below helps to protect against XSRF attacks
    var requestCookie = Request.Cookies[AntiXsrfTokenKey];
    Guid requestCookieGuidValue;

    if (requestCookie != null && Guid.TryParse(requestCookie.Value, out requestCookieGuidValue))
    {
        // Use the Anti-XSRF token from the cookie
        _antiXsrfTokenValue = requestCookie.Value;
        Page.ViewStateUserKey = _antiXsrfTokenValue;
    }
    else
    {
        // Generate a new Anti-XSRF token and save to the cookie
        _antiXsrfTokenValue = Guid.NewGuid().ToString("N");
        Page.ViewStateUserKey = _antiXsrfTokenValue;
    }
}

In the code above, we can see that the ViewStateUserKey is now being set in the Master Page by default.  What a great addition.  So what are the limits here?

For starters, and hopefully this is obvious, this technique doesn't work on requests that don't use ViewState.  Remember the example we used earlier in the post?  There is no ViewState there, so this doesn't offer any protection for that situation.  There are also some other situations that could lead to this not working.  In .Net 2.0, with EventValidation disabled, ViewStateUserKey would not get validated if the ViewState was empty.  I have discussed the ability to pass an empty ViewState before, and this is one of the perks.  Many times ViewState may be present, but the developers do not need it for processing the request.  If we pass it as __VIEWSTATE= with no content, then in 2.0 ViewStateUserKey will not get checked.  This was changed in .Net 4.0, where the framework now checks ViewStateUserKey, if it was set, even when the ViewState is empty.  This doesn't affect requests that don't use ViewState, like the example above.

Nonce or Anti-Forgery Token

Another technique that can be used to protect requests from CSRF is what is called a "nonce."  A nonce is a single-use token that gets included with every request.  This token is only known to the user and changes for each request.  The idea is that only the requestor of the page will have a valid token to submit with the action.  In our example above, a new parameter would need to exist, such as this:

http://www.jardinesoftware.com/delete.aspx?userid=6&antiforgery=GJ38r4Elke7823SERw

Yes, that value for the antiforgery parameter is just made up.  It should be random so it is not guessable by an attacker.  This limits an attacker's ability to know what YOUR request to the resource looks like.  This is a great way to mitigate CSRF, but it can be tricky to implement.  ASP.Net MVC has built-in functionality for this.  For Web Forms, you either have to build it yourself, or you can look to OWASP and their CSRFGuard project.  I am not sure how stable it is for .Net, but it could be a good starting point.
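
For reference, the MVC built-in support mentioned above looks roughly like this: the view emits the token inside the form with @Html.AntiForgeryToken(), and the action validates it.  The action and parameter names below are illustrative:

    [HttpPost]
    [ValidateAntiForgeryToken]   // rejects the request if the anti-forgery token is missing or invalid
    public ActionResult Delete(int userId)
    {
        // Delete logic would go here (illustrative only).
        return RedirectToAction("Index");
    }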

In Visual Studio 2012, the default Web Forms application template attempts to add anti-CSRF functionality into the master page.  The following code snippet shows some of the code that does this (code that is not relevant has been removed, and there is supporting code from other functions that is not shown):

protected void Page_Init(object sender, EventArgs e)
{
    // The code below helps to protect against XSRF attacks
    var requestCookie = Request.Cookies[AntiXsrfTokenKey];
    Guid requestCookieGuidValue;
    if (requestCookie != null && Guid.TryParse(requestCookie.Value, out requestCookieGuidValue))
    {
        // Use the Anti-XSRF token from the cookie
        _antiXsrfTokenValue = requestCookie.Value;
    }
    else
    {
        // Generate a new Anti-XSRF token and save to the cookie
        _antiXsrfTokenValue = Guid.NewGuid().ToString("N");

        var responseCookie = new HttpCookie(AntiXsrfTokenKey)
        {
            HttpOnly = true,
            Value = _antiXsrfTokenValue
        };
        if (FormsAuthentication.RequireSSL && Request.IsSecureConnection)
        {
            responseCookie.Secure = true;
        }
        Response.Cookies.Set(responseCookie);
    }
}

In the code above, we can see the anti-CSRF value being generated and stored in the cookies collection.  Not shown, the anti-CSRF value is also stored in the ViewState as well.  This shows that Microsoft is making an attempt to help developers protect their applications by providing default implementations like this.
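
For completeness, the companion piece of the template stores the token in ViewState on the first request and compares it on postback.  The snippet below is an illustrative sketch based on that template, not the exact generated code; AntiXsrfTokenKey and AntiXsrfUserNameKey are the template's constants:

    protected void master_Page_PreLoad(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Store the anti-XSRF token (and user name) in ViewState.
            ViewState[AntiXsrfTokenKey] = Page.ViewStateUserKey;
            ViewState[AntiXsrfUserNameKey] = Context.User.Identity.Name ?? String.Empty;
        }
        else
        {
            // Validate the anti-XSRF token on postback.
            if ((string)ViewState[AntiXsrfTokenKey] != _antiXsrfTokenValue
                || (string)ViewState[AntiXsrfUserNameKey] != (Context.User.Identity.Name ?? String.Empty))
            {
                throw new InvalidOperationException("Validation of Anti-XSRF token failed.");
            }
        }
    }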

Require Credentials

Another technique that is often employed is to require the user to re-enter their credentials before performing a sensitive transaction.  In the example above, when the administrator clicked the link, he would need to enter his password before the delete occurred.  This is a fairly effective approach because the attacker doesn’t know the administrator’s password (I hope).  The downside to this technique is that users may get upset if they constantly have to re-enter their credentials.  By placing too much burden on the users, they may decide to use something else. This must be used sparingly. 

CAPTCHA

CAPTCHAs can also be used to help protect against CSRF.  Again, this is a unique value per user that the attacker should not know.  Like the credentials solution, it does require more work by the user.  Check out Rafal Los' post, "Is unusable the same as 'secure'? Why security is borked," where he points out the difficulties with a CAPTCHA system when humans can't read them.

Use POST Requests

This is not really a mitigation, but more of a recommendation.  CSRF can be performed on POST requests too.  I just wanted to mention this here to cover it, but this really doesn’t have much weight.  All an attacker needs to do is get the victim to visit a page with a hidden form containing the attack request and use JavaScript to auto-submit that form behind the scenes.

Check the Referrer

This is similar to the use of POST requests.  It can be bypassed with the proper technologies and isn’t a full solution.  This is another item that needs to be implemented properly.  I have seen situations where this has been implemented on pages that were ok to access directly and it caused issues. 

Conclusion

As you can see, there are many different ways, including more than listed, to protect against CSRF.  Microsoft has implemented some nice new changes into the default Visual Studio 2012 Web Form template to help protect against CSRF by default.  It is important to understand what the implementation is and its limits of protection.  Without this understanding it is easy to overlook a situation where your application could be vulnerable.
