XXE DoS and .Net

May 6, 2019
Filed under: Development, Security 

External XML Entity (XXE) vulnerabilities can be more than just a risk of remote code execution (RCE), information leakage, or server side request forgery (SSRF). A denial of service (DoS) attack is commonly overlooked. However, given a mis-configured XML parser, an attacker may be able to tie up your application's resources, limiting the ability of legitimate users to access the application when they need it.

In most cases, the parser can be configured to simply ignore any entities by disabling DTD parsing. As a matter of fact, many of the common parsers do this by default. If the DTD is not processed, even the denial of service risk is removed.
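
For .Net's XmlReader, this is just a setting on XmlReaderSettings. Here is a minimal sketch (the file name untrusted.xml is only a placeholder):

// Refuse DTDs entirely - any DOCTYPE causes an XmlException before entities
// are ever expanded. This is also the default for XmlReaderSettings.
var settings = new System.Xml.XmlReaderSettings
{
    DtdProcessing = System.Xml.DtdProcessing.Prohibit
};

using (var reader = System.Xml.XmlReader.Create("untrusted.xml", settings))
{
    while (reader.Read()) { }
}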

For this post, I want to talk about what happens when DTDs are parsed, and focus specifically on the denial of service aspect. One of the properties that becomes important when working with .Net and XML is the MaxCharactersFromEntities property.

The purpose of this property is to limit how long the expanded value of an entity can be. This is important because a DoS attempt often uses nested, expanding entities to produce a very large payload from very few actual lines of data. The following is an example of what a DoS attack might look like using entities.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [ <!ELEMENT foo ANY >
<!ENTITY dos 'dos' >
<!ENTITY dos1 '&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;&dos;' >
<!ENTITY dos2 '&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;&dos1;' >
<!ENTITY dos3 '&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;&dos2;' >
<!ENTITY dos4 '&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;&dos3;' >
<!ENTITY dos5 '&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;&dos4;' >
<!ENTITY dos6 '&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;&dos5;' >]>

Notice in the above example we have multiple entities that each reference the previous one multiple times. This results in a very large string being created when dos6 is actually referenced in the XML code. This would probably not be large enough to actually cause a denial of service, but you can see how quickly this becomes a very large value.

To help protect the XML parser and the application, the MaxCharactersFromEntities property limits how large this expansion can get. Once it reaches the maximum, the parser throws a System.Xml.XmlException: ‘The input document has exceeded a limit set by MaxCharactersFromEntities’.

The Microsoft documentation (linked above) states that the default value is 0. This means that it is undefined and there is no limit in place. Through my testing, it appears that this is true for ASP.Net Framework versions up to 4.5.1. In 4.5.2 and above, as well as .Net Core, the default value for this property is 10,000,000. This is most likely a small enough value to protect against denial of service with the XmlReader object.
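
If you do need DTDs parsed, the property can be set explicitly rather than relying on the framework default. Below is a minimal sketch; the limit of 1024 characters is arbitrary and only meant for illustration:

// Allow DTDs, but cap how many characters entity expansion may produce.
var settings = new System.Xml.XmlReaderSettings
{
    DtdProcessing = System.Xml.DtdProcessing.Parse,
    MaxCharactersFromEntities = 1024   // arbitrary low limit for this example
};

using (var reader = System.Xml.XmlReader.Create("untrusted.xml", settings))
{
    // Reading the payload shown above would throw an XmlException as soon as
    // the expanded entity text exceeded the limit.
    while (reader.Read()) { }
}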

Overview of Web Security Policies

June 27, 2018
Filed under: Development, Security, Testing 

A vulnerability was just identified in your website. How would you know?

How a vulnerability should be disclosed to an organization is often very difficult to figure out. Whether or not you offer any type of bounty for security bugs, it is important that there is a clear path for someone to notify you of a potential concern.

Unfortunately, the process is different for every application and it can be very difficult to find. For someone who is just trying to help out, this can be very frustrating. Some websites may have a separate security page with contact information. Other sites may just have a security email address on the contact us page. Many sites don’t have any clear indication of how to report such a finding. Maybe we could just use the security@ email address for the organization, but have they even configured it?

In an effort to help standardize how to find this information, there is a draft specification for web security policies. You can read the draft at https://tools.ietf.org/html/draft-foudil-securitytxt-03. The goal is to specify a text file at a known path that provides contact information for users to submit potential security concerns.

How it works

The first step is to create a security.txt file that describes your web security policy. According to the specification, this file should be found in the .well-known directory, which makes your text file available at /.well-known/security.txt. In some circumstances, it may also be found at just /security.txt.

The purpose of pinning down the name of the file and where it should be located is to limit the searching process. If someone finds an issue, they know where to go to find the right contact information or process.

The next step is to put the relevant information into the security.txt file. The draft documentation covers this in depth, but I want to give a quick example of what this may look like:

Security.txt

— Start of File —

# This is a sample security.txt file
Contact: mailto:james@developsec.com
Contact: tel:+1-904-638-5431

# Encryption - This links to my public PGP key
Encryption: https://www.jardinesoftware.com/jamesjardine-public.txt

# Policy - Links to a policy page outlining what you are looking for
Policy: https://www.jardinesoftware.com/security-policy

# Acknowledgments - If you have a page that acknowledges users that have submitted a valid bug
Acknowledgments: https://www.jardinesoftware.com/acknowledgments

# Hiring - If you offer security related jobs, put the link to that page here
Hiring: https://www.jardinesoftware.com/jobs

# Signature - To help secure your file, create a signature file and reference it here.
Signature: https://www.jardinesoftware.com/.well-known/security.txt.sig

— End of File —

I included some comments in the sample above to show what each item is for. A key point is that very little policy information is actually included in the file; rather, it is linked as a reference. For example, the PGP key is not embedded in the file, instead the file links to where the key can be found.

The goal of the file is to be in a well defined location and provide references to your different security policies and procedures.

WHAT DO YOU THINK?

So I am curious, what do you think about this technique? While it is still in draft status, it is an interesting concept. It gives organizations a known path to follow to provide this type of information.

I don’t believe this requires creating a bug bounty program, or even promotes security testing of your site without permission. However, it does at least provide a means to share your expectations and contact details with someone who finds a flaw and wants to share that information with you.

Will we see this move forward, or do you think it will not catch on? If it is a good idea, what is the best way to raise the awareness of it?

XSS in Script Tag

June 27, 2018
Filed under: Development, Security, Testing 

Cross-site scripting is a pretty common vulnerability, even with many of the new advances in UI frameworks. One of the first things we mention when discussing the vulnerability is to understand the context. Is it HTML, Attribute, JavaScript, etc.? This understanding helps us better understand the types of characters that can be used to expose the vulnerability.

In this post, I want to take a quick look at placing data within a <script> tag. In particular, I want to look at how embedded <script> tags are processed. Let’s use a simple web page as our example.

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<a href=test.html>test</a>";
	</script>
	</body>
</html>

The above example works as we expect. When you load the page, nothing is displayed. The link tag embedded in the variable is treated as a string, not parsed as a link tag. What happens, though, when we embed a <script> tag?

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "<script>alert(9)</script>";
	</script>
	</body>
</html>

In the above snippet, nothing actually happens on the screen, meaning that the alert box does not trigger. This often misleads people into thinking the code is not vulnerable to cross-site scripting: if the link tag is not processed, why would the script tag be? In many situations, the understanding is that we need to break out of the (“) delimiter to start writing our own JavaScript commands. For example, a payload of (test”;alert(9);t = “) would break out of the x variable and add new JavaScript commands. Of course, this doesn’t work if the (“) character is properly encoded to prevent breaking out.

Going back to our previous example, we may have overlooked something very simple. The script wasn’t failing to execute because it wasn’t being parsed. Instead, it wasn’t executing because our JavaScript was bad: we were attempting to open a <script> within a <script>. What if we modify our value to the following:

<html>
	<head>
	</head>
	<body>
	<script>
		var x = "</script><script>alert(9)</script>";
	</script>
	</body>
</html>

In the above code, we are first closing out the original <script> tag and then starting a new one. This removes the nesting problem, and when the page is loaded, the alert box will appear.

This technique works in many places where a user can control the text returned within a <script> element. Of course, the important remediation step is to make sure that data is properly encoded when returned to the browser. Content Security Policy may not be an immediate solution by default, since this situation implies that inline scripts are allowed. However, limiting the use of inline scripts to ones with a registered nonce would help prevent this technique. This reference shows how to set the nonce (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src).
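
As a rough sketch of the encoding piece (assuming the value is being written into a JavaScript string from ASP.Net code, and with userInput as a stand-in for the untrusted data), System.Web's JavaScriptStringEncode escapes the characters that would otherwise allow breaking out of that context:

// userInput represents the untrusted value that ends up inside the <script> block.
string userInput = "</script><script>alert(9)</script>";

// The default encoder escapes quotes and backslashes, and encodes characters such
// as < and > as \u003c and \u003e, so the browser's HTML parser never sees a
// closing </script> inside the string literal.
string safe = System.Web.HttpUtility.JavaScriptStringEncode(userInput);

// Emit the value into the page, e.g. var x = "...";
string script = "var x = \"" + safe + "\";";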

When testing our applications, it is important to focus on the lack of output encoding and less on the ability to fully exploit a situation. Our secure coding standards should identify the types of encoding that should be applied to outputs. If the encoding is not properly implemented, then we cite a violation of our standards.

Security Tips for Copy/Paste of Code From the Internet

February 6, 2017
Filed under: Development, Security 

Developing applications has long involved using code snippets found through textbooks or on the Internet. Rather than re-invent the wheel, it makes sense to identify existing code that helps solve a problem. It may also help speed up the development time.

Years ago, maybe 12, I remember a co-worker who had a SQL Injection vulnerability in his application. The culprit: code copied from someone else. At the time, I explained that once you copy code into your application, it becomes your responsibility.

Here, 12 years later, I still see this type of occurrence: code snippets taken directly from the web and used in the application. In many of these cases there may be some form of security weakness. How often do we, as developers, really analyze and understand all the details of the code that we copy?

Here are a few tips when working with external code brought into your application.

Understand what it does

If you were looking for code snippets, you should have a good idea of what the code will do. Better yet, you probably have an understanding of what you think that code will do. How vigorously do you inspect it to make sure that is all it does? Maybe the code performs the specific task you set out to complete, but what happens if there are other functions you weren’t even looking for? This may not be as much of a concern with very small snippets. However, with larger sections of code, it could cover up other functionality. This doesn’t mean that the functionality is intentionally malicious, but undocumented, unintended functionality may open up risk to the application.

Change any passwords or secrets

Depending on the code that you are copying, there may be secrets within it. For example, encryption routines are commonly grabbed off the Internet, and to be complete, the samples contain hard-coded IVs and keys. These should be changed to something unique when imported into your projects. This could also be the case for code that has passwords or other hard-coded values that may provide access to the system.
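
As a small, hypothetical illustration of that point (the values below are made up and not from any particular snippet), the sample's baked-in key and IV should be swapped for ones you generate and manage yourself:

// Straight from a copied snippet - everyone who copied the sample shares these values.
byte[] sampleKey = System.Text.Encoding.UTF8.GetBytes("0123456789ABCDEF0123456789ABCDEF");
byte[] sampleIV  = System.Text.Encoding.UTF8.GetBytes("0123456789ABCDEF");

// Better: generate your own key once (and store it in protected configuration or a
// secrets store), and generate a fresh IV for every encryption operation.
using (var aes = System.Security.Cryptography.Aes.Create())
{
    aes.GenerateKey();   // replaces the shared sample key
    aes.GenerateIV();    // unique per message; store it alongside the ciphertext
}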

As I was writing this, I noticed a post about the RadAsyncUpload control regarding the defaults within it. While this is not code copy/pasted from the Internet, it highlights the need to understand the default configurations and that some values should be changed to help provide better protections.

Look for potential vulnerabilities

In addition to the above concerns, the code may have vulnerabilities in it. Imagine a snippet of code used to select data from a SQL database. What if that code passes your tests by accurately returning the expected records, but uses inline SQL and is vulnerable to SQL Injection? The same could happen with code vulnerable to Cross-Site Scripting or code that does not check proper authorization.

We have to do a better job of performing code reviews on these external snippets, just as we should be doing it on our custom written internal code. Finding snippets of code that perform our needed functionality can be a huge benefit, but we can’t just assume it is production ready. If you are using this type of code, take the time to understand it and review it for potential issues. Don’t stop at just verifying the functionality. Take steps to vet the code just as you would any other code within your application.

SQL Injection: Calling Stored Procedures Dynamically

October 26, 2016
Filed under: Development, Security, Testing 

It is not news that SQL Injection is possible within a stored procedure. There have been plenty of articles discussing this issue. However, there is a unique way that some developers execute their stored procedures that makes them vulnerable to SQL Injection, even when the stored procedure itself is actually safe.

Look at the example below. The code is using a stored procedure, but it is calling the stored procedure using a dynamic statement.

    conn.Open();
    var cmdText = "exec spGetData '" + txtSearch.Text + "'";
    SqlDataAdapter adapter = new SqlDataAdapter(cmdText, conn);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

It doesn’t really matter what is in the stored procedure for this particular example, because the stored procedure is not where the injection occurs. Instead, the injection occurs when the EXEC statement is concatenated together. The search parameter is being dynamically added in, which we know is bad.

This can be quickly tested by just inserting a single quote (‘) into the search field and viewing the error message returned. It would look something like this:

System.Data.SqlClient.SqlException (0x80131904): Unclosed quotation mark after the character string ”’. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at

With a little more probing, it is possible to get more information leading us to understand how this SQL is constructed. For example, by placing ‘,’ into the search field, we see a different error message:

System.Data.SqlClient.SqlException (0x80131904): Procedure or function spGetData has too many arguments specified. at System.Data.SqlClient.SqlConnection.

The mention of the stored procedure having too many arguments helps identify this technique for calling stored procedures.

With SQL we have the ability to execute more than one statement in a given batch. In this case, we just need to break out of the current exec statement and add our own statement. Remember, this doesn’t affect the execution of the spGetData stored procedure. We are looking at the ability to add new statements to the request.

Let’s assume we search for this:

james@test.com';SELECT * FROM tblUsers--

This would change our cmdText to look like:

exec spGetData 'james@test.com';SELECT * FROM tblUsers--'

The above query will execute the spGetData stored procedure and then execute the following SELECT statement, ultimately returning 2 result sets. In many cases, this is not that useful for an attacker because the second result set would not be returned to the user. However, that doesn’t make an attack impossible. Instead, it turns our attention more toward what we can do, rather than what we can retrieve.

At this point, we are able to execute any commands against the SQL Server that the user has permission to. This could mean executing other stored procedures, dropping or modifying tables, adding records to a table, or even more advanced attacks such as manipulating the underlying operating system. An example might be to do something like this:

james@test.com';DROP TABLE tblUsers--

If the user has permissions, the server would drop tblUsers, causing a lot of problems.

When calling stored procedures, it should be done using command parameters, rather than dynamically. The following is an example of using proper parameters:

    conn.Open();
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "spGetData";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Connection = conn;
    cmd.Parameters.AddWithValue("@someData", txtSearch.Text);
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

The code above adds parameters to the command object, removing the ability to inject into the dynamic code.

It is easy to think that because we are using a stored procedure, and the stored procedure itself may be safe, we are secure. Unfortunately, simple mistakes like this can lead to a vulnerability. Make sure that you are properly making database calls using parameterized queries. Don’t use dynamic SQL, even if it is to call a stored procedure.

Does the End of an Iteration Change Your View of Risk?

February 16, 2016
Filed under: Development, Security, Testing 

You have been working hard for the past few weeks or months on the latest round of features for your flagship product. You are excited. The team is excited. Then a security test identifies a vulnerability. Balloons deflate and everyone starts to scramble.

Take a breath.

Not all vulnerabilities are created equal and the risk that each presents is vastly different. The organization should already have a process for triaging security findings. That process should be assessing the risk of the finding to determine its impact on the application, organization, and your customers. Some of these flaws will need immediate attention. Some may require holding up the release. Some may pose a lower risk and can wait.

Take the time to analyze the situation.

If an item is severe and poses great risk, by all means, stop what you are doing and fix it. But what happens when the risk is fairly low? When I say risk, I include the ability for it to be exploited. The difficulty of exploitation can be a critical factor in the decision you make.

When does the risk of remediation override the risk of waiting until the next iteration?

There are some instances where the risk to remediate so late in the iteration may actually be higher than waiting until the next iteration to resolve the actual issue. But all security vulnerabilities need to be fixed, you say? This is not an attempt to get out of doing work or not resolve issues. However, I believe there are situations where the risk of the exploit is less than the risk of trying to fix it in a chaotic, last minute manner.

I can’t count the number of times I have seen issues arise that appeared to be simple fixes. The bug was not very serious and could only be exploited in a very limited way. For example, the bug required the user’s machine to be compromised to enable exploitation. The fix, however, ended up taking more than a week due to some complications. When the bug appeared 2 days before code freeze, there were many discussions about performing the fix, and potentially holding up the release, versus moving the remediation to the next iteration.

When we take the time to analyze the risk and exposure of the finding, it is possible to make an educated decision as to which risk is better for the organization and the customers. In this situation, the assumption is that the user’s system would need to be compromised for the exploit to happen. If that is true, the application is already vulnerable to password sniffing or other attacks that would make this specific exploit a waste of time.

Forcing a fix at this point in the game increases the chances of introducing another vulnerability, possibly more severe than the one that we are trying to fix. Was that risk worth it?

Timing can have an effect on our judgment when it comes to resolving security issues. It should not be used as a scapegoat or a reason not to fix things. When analyzing the risk of an item, make sure you are also considering how that may affect the environment as a whole. This risk may not lie directly with the flaw, but indirectly in how it is fixed. There is no hard and fast rule, which is exactly why we use a risk based approach.

Engage your information security office or enterprise risk teams to help with the analysis. They may be able to provide a different point of view or insight you may have overlooked.

ViewStateUserKey: ViewStateMac Relationship

November 26, 2013
Filed under: Development, Security, Testing 

I apologize for the delay; I recently spoke about this at the SANS Pen Test Summit in Washington D.C. but haven’t had a chance to put it into a blog post. While I was doing some research for my presentation on hacking ASP.Net applications, I came across something very interesting that sort of blew my mind. One of my topics was ViewStateUserKey, which is a feature of .Net to help protect forms from Cross-Site Request Forgery. I had always assumed that setting this value (it is off by default) put a unique key into the view state for the specific user. ViewState is a client-side storage mechanism that the form uses to help maintain state.

I have a previous post about ViewStateUserKey and how to set it here: http://www.jardinesoftware.net/2013/01/07/asp-net-and-csrf/

While I was doing some testing, I found that my ViewState wasn’t different between users even though I had set the ViewStateUserKey value. Of course it was late at night.. well ok, early morning, so I thought maybe I wasn’t setting it right. But I triple checked and it was right. Upon closer inspection, my view state was identical between my two users. I was really confused because, as I mentioned, I thought it put a unique value into the view state to make the view state unique.

My problem… ViewStateMAC was disabled. But wait.. what does ViewStateMAC have to do with ViewStateUserKey? That is what I said. So I started digging in with Reflector to see what was going on. What did I find? The ViewStateUserKey is actually used as a modifier for the ViewStateMac. It doesn’t store a special value in the ViewState.. rather, it modifies how the MAC is generated to protect the ViewState from parameter tampering.

So this does work*. If the MAC is different between users, then the ViewState is ultimately different and the attacker’s value is different from the victim’s. When the ViewState is submitted, the MACs won’t match, which is what we want.

Unfortunately, this means we are relying again on ViewStateMAC being enabled. Don’t get me wrong, I think it should be enabled, and this is yet another reason why. Without it, it doesn’t appear that the ViewStateUserKey does anything. We have been saying for the longest time that to protect against CSRF you should set the ViewStateUserKey. No one has said it relies on ViewStateMAC though.
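
For reference, here is a minimal sketch of setting the key in a page (or base page). Tying it to the session ID is just one common choice; the important takeaway is that it only provides protection when ViewStateMac remains enabled.

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    // Ties the ViewState MAC computation to the current session. Only effective
    // when ViewStateMac is enabled for the page/application.
    ViewStateUserKey = Session.SessionID;
}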

To Recap.. Things that rely on ViewStateMAC:

  • ViewState
  • Event Validation
  • ViewStateUserKey

It is important that we understand the framework features as disabling one item could cause a domino effect of other items. Be secure.

2012 in Review

December 31, 2012
Filed under: Development, Security, Testing 

Well here it is, 2012 is coming to an end and I thought I would wish everyone happy holidays, as well as mention some of the topics covered this year on my blog.

The year started out with a few issues in the ASP.Net framework. We saw a Forms Authentication Bypass that was patched at the very end of 2011 and an ASP.Net Insecure Redirect issue. Both of these issues show exactly why it is important to keep your frameworks patched.

Next, I spent a lot of time discussing ViewStateMAC and EventValidation. This was some new stuff mixed in with some old. We learned that ViewStateMAC also protects the EventValidation field from being tampered with, a fact I couldn’t find stated in any MSDN documentation. In addition, I showed how it is possible to manipulate the EventValidation field (when ViewStateMAC is not enabled) to tamper with the application.

I also created the ASP.Net Webforms CSRF Workflow, which is a small diagram to determine possible CSRF vulnerabilities with an ASP.Net web form application.

The release of .Net 4.5 was fairly big and some of the enhancements are really great. One of those was the change in how Request Validation works. The addition of lazy validation makes it possible to control which parts of a request do or do not get validated. In addition, ModSecurity was released for IIS.

The release of the Web.Config Security Analyzer happened early on in the year. It is a simple tool that can be used to scan a web.config file for common security misconfigurations.

Some other topics covered included .Net Validators (let’s not forget the check for Page.IsValid), Forms Authentication Remember Me functionality, how the Request Method can matter, and a Request Validation Bypass technique.

I discussed how XSS can be performed by tampering with the ViewState and the circumstances needed for it to be possible. This is commonly overlooked by both developers and testers.

In addition, I have created a YouTube channel for creating videos of some of these demonstrations. There are currently two videos available, but look forward to more coming in 2013.

There is a lot to look forward to in 2013 and I can’t wait to get started. Look for more changes and content coming out of Jardine Software and its resources.

I hope everyone had a great year in 2012 and that 2013 brings better things to come.

ViewState XSS: What’s the Deal?

September 17, 2012
Filed under: Development, Security, Testing 

Many of my posts have discussed some of the protections that ASP.Net provides by default.  For example, Event Validation, ViewStateMac, and ViewStateUserKey.  So what happens when we are not using these protections?  Each of these has a different effect on what is possible from an attacker’s standpoint, so it is important to understand what these features do for us.  Many of these are covered in prior posts.  I often get asked the question “What can happen if the ViewState is not properly protected?”  This can be a difficult question because it depends on how it is not protected, and also how it is used.  One thing that can possibly be exploited is Cross-site Scripting (XSS).  This post will not dive into what XSS is, as there are many other resources that do that.  Instead, I will show how an attacker could take advantage of reflected XSS by using unprotected ViewState.

For this example, I am going to use the most basic of login forms.  The form doesn’t even actually work, but it is functional enough to demonstrate how this vulnerability could be exploited.  The form contains user name and password textboxes, a login button, and an ASP.Net label control that displays copyright information.  Although probably not very obvious, our attack vector here is going to be the copyright label.

Why the Label?

You may be wondering why we are going after the label here.  The biggest reason is that the developers have probably overlooked output encoding on what would normally be pretty static text.  Copyrights do not change that often, and they are usually loaded on the initial page load.  All post-backs will then just re-populate the data from the ViewState.  That is our entry point.  Here is a quick look at what the page code looks like:

 1: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
 2:     <span>UserName:</span><asp:TextBox ID="txtUserName" runat="server" />
 3:     <br />
 4:     <span>Password:</span><asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />
 5:     <br />
 6:     <asp:Button ID="cmdSubmit" runat="server" Text="Login" /><br />
 7:     <asp:Label ID="lblCopy" runat="server" />
 8: </asp:Content>

We can see on line 7 that we have the label control for the copyright data.   Here is the code behind for the page:

 1: protected void Page_Load(object sender, EventArgs e)
 2: {
 3:     if (!Page.IsPostBack)
 4:     {
 5:         lblCopy.Text = "Copy 2012 Test Company";
 6:     }
 7: }

Here you can see that only on initial page load, we set the copy text.  On Postback, this value is set from the ViewState.

The Attack

Now that we have an idea of what the code looks like, let’s take a look at how we can take advantage of this.  Keep in mind there are many factors that go into this working, so it will not work on all systems.

I am going to use Fiddler to do the attack for this example.  In most of my posts, I usually use Burp Suite, but there is a cool ViewState Decoder that is available for Fiddler that I want to use here.  The following screen shows the login form on the initial load:

I will set up Fiddler to break before requests so I can intercept the traffic.  When I click the login button, Fiddler will intercept the request and wait for me to fiddle with the traffic.  The next screen shows the traffic intercepted.  Note that I have underlined the copy text in the ViewState decoder.  This is where we are going to make our change.

The attack will load in a simple alert box to demonstrate the presence of XSS.  To load this in the ViewState Decoder’s XML format, I am going to encode the attack using HTML Entities.  I used the encoder at http://ha.ckers.org/xss.html to perform the encoding.  The following screen shows the data encoded in the encoder:

I need to copy this text from the encoder and paste it into the copyright field in the ViewState decoder window.  The following image shows this being done:

Now I need to click the “Encode” button for the ViewState.  This will automatically update the ViewState field for this request.   Once I do that, I can “Resume” my request and let it complete.   When the request completes, I will see the login page reload, but this time it will pop up an alert box as shown in the next screen:

This shows that I was able to perform an XSS attack by manipulating a ViewState parameter.  And as I mentioned earlier, this is reflected XSS since the payload is reflected back from the ViewState.  Win for the attacker.

So What, I Can Attack Myself

Oftentimes, when I talk about this technique, the first response is that the attacker could only run XSS against themselves since this is in the ViewState.  How can we get this to our victim?  The good news for the attacker: .Net is going to help us attack our victims here.  Without going into the details, the premise is that .Net will read the ViewState value from the GET or POST data depending on the request method.  So if we send a GET request, it will read the ViewState from the querystring.  If we make the following request to the page, it will pull the ViewState values from the querystring and execute the XSS just like the first time we ran it:

http://localhost:51301/Default.aspx?__VIEWSTATE=%2fwEPDwU
KLTE0NzExNjI2OA9kFgJmD2QWAgIDD2QWAgIFD2QWAgIHDw8WAh4
EVGV4dAUlQ29weTxzY3JpcHQ%2bYWxlcnQoOSk7PC9zY3JpcHQ%2b
Q29tcGFueWRkZA%3d%3d&ctl00%24MainContent%24txtUserName=
&ctl00%24MainContent%24txtPassword=
&ctl00%24MainContent%24cmdSubmit=Login

Since we can put this into a GET request, it is easier to send this out in phishing emails or other payloads to get a victim to execute the code.  Yes, as a POST, we can get a victim to run this as well, but we are open to so much more when it is a GET request since we don’t have to try and submit a form for this to work.

How to Fix It

Developers can fix this issue quite easily.  They need to encode the output for starters.  For the encoding to work, however, you should set the value yourself on postback too.  So instead of just setting that hard-coded value on the initial page load, think about setting it every time.  Otherwise the encoding will not solve the problem.  Additionally, enable built-in protections like ViewStateMac, which will help prevent an attacker from tampering with the ViewState, or consider encrypting the ViewState.
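
A rough sketch of that change in the code-behind (using HtmlEncode for the output encoding and dropping the IsPostBack check so the value is reset on every request) might look like this:

protected void Page_Load(object sender, EventArgs e)
{
    // Set and encode the value on every request, not just the initial load, so a
    // tampered ViewState value is never echoed back to the browser as-is.
    lblCopy.Text = System.Web.HttpUtility.HtmlEncode("Copy 2012 Test Company");
}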

Final Thoughts

This is a commonly overlooked area of security for .Net developers because there are many assumptions and misunderstandings about how ViewState works in this scenario.  The complexity of configuration doesn’t help either.  Many times developers think that since it is a hard-coded value, it can’t be manipulated.  We just saw that under the right circumstances, it very well can be manipulated.

As testers, we need to look for this type of vulnerability and understand it so we can help the developers understand the capabilities of it and how to resolve it.  As developers, we need to understand our development language and its features so we don’t overlook these issues.  We are all in this together to help decrease the vulnerabilities available in the applications we use.

Updated [11/12/2012]: Uploaded a video demonstrating this concept.

ASP.Net Webforms CSRF Workflow

February 7, 2012
Filed under: Security, Testing 

An important aspect of application security is the ability to verify whether or not vulnerabilities exist in the target application.  This task is usually outsourced to a company that specializes in penetration testing or vulnerability assessments.  Even if the task is performed internally, it is important that the testers have as much knowledge about vulnerabilities as possible.  It is often said that a pen test is just testing the tester’s capabilities.  In many ways that is true.  Every tester is different, each having different techniques, skills, and strengths. Companies rely on these tests to assess the risk the application poses to the company.

In an effort to help add knowledge to the testers, I have put together a workflow to aid in testing for Cross Site Request Forgery (CSRF) vulnerabilities.  This can also be used by developers to determine if, by their settings, their application may be vulnerable.  This does not cover every possible configuration, but focuses on the most common.  The workflow can be found here: CSRF Workflow.  I have also included the full link below.

Full Link: http://www.jardinesoftware.com/Documents/ASP_Net_Web_Forms_CSRF_Workflow.pdf

Happy Testing!!

 

The information is provided as-is and is for educational purposes only.  Jardine Software is not liable or responsible for inappropriate use of this information.
