SQL Injection: Calling Stored Procedures Dynamically

October 26, 2016
Filed under: Development, Security, Testing 

It is not news that SQL Injection is possible within a stored procedure. There have been plenty of articles discussing this issue. However, there is a unique way that some developers execute their stored procedures that makes them vulnerable to SQL Injection, even when the stored procedure itself is actually safe.

Look at the example below. The code is using a stored procedure, but it is calling the stored procedure using a dynamic statement.

    conn.Open();
    var cmdText = "exec spGetData '" + txtSearch.Text + "'";
    SqlDataAdapter adapter = new SqlDataAdapter(cmdText, conn);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

It doesn't really matter what is in the stored procedure for this particular example, because the stored procedure is not where the injection occurs. Instead, the injection occurs when the EXEC statement is concatenated together. The search parameter (an email address, in this example) is added in dynamically, which we know is bad.

This can be quickly tested by just inserting a single quote (') into the search field and viewing the error message returned. It would look something like this:

System.Data.SqlClient.SqlException (0x80131904): Unclosed quotation mark after the character string ”’. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at

With a little more probing, it is possible to get more information about how this SQL is constructed. For example, by placing ',' into the search field, we see a different error message:

System.Data.SqlClient.SqlException (0x80131904): Procedure or function spGetData has too many arguments specified. at System.Data.SqlClient.SqlConnection.

The mention of the stored procedure having too many arguments helps identify this technique for calling stored procedures.

With SQL we have the ability to execute more than one query in a given batch. In this case, we just need to break out of the current EXEC statement and add our own statement. Remember, this doesn't affect the execution of the spGetData stored procedure. We are looking at the ability to add new statements to the request.

Let's assume we search for this:

james@test.com';SELECT * FROM tblUsers--

This would change our cmdText to look like:

exec spGetData 'james@test.com';SELECT * FROM tblUsers--'

The above query will execute the spGetData stored procedure and then execute the added SELECT statement, ultimately returning two result sets. In many cases this is not that useful for an attacker, because the second result set is never returned to the user. However, that doesn't make an attack impossible. Instead, it turns our attention toward what we can do, not what we can retrieve.

At this point, we are able to execute any commands against the SQL Server that the user has permission to execute. This could mean running other stored procedures, dropping or modifying tables, adding records to a table, or even more advanced attacks such as manipulating the underlying operating system. An example might be to do something like this:

james@test.com';DROP TABLE tblUsers--

If the user has permissions, the server would drop tblUsers, causing a lot of problems.

When calling stored procedures, use command parameters rather than a dynamically built statement. The following is an example of using proper parameters:

    conn.Open();
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "spGetData";
    // CommandType.StoredProcedure calls the procedure directly,
    // so no EXEC string is ever built from user input.
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Connection = conn;
    // The user input is passed as a parameter value, not as SQL text.
    cmd.Parameters.AddWithValue("@someData", txtSearch.Text);
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    conn.Close();
    grdResults.DataSource = ds.Tables[0];
    grdResults.DataBind();

The code above adds parameters to the command object, removing the ability to inject into the dynamic code.
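AddWithValue infers the SQL type from the .NET value it is given. If you prefer to be explicit, a small variation like the following also works (a sketch; the varchar length is an assumption about the underlying column):

    // A sketch: an explicitly typed parameter instead of AddWithValue.
    // VarChar(256) is an assumption about the column definition.
    SqlParameter p = cmd.Parameters.Add("@someData", SqlDbType.VarChar, 256);
    p.Value = txtSearch.Text;

Either way, the search value is sent as data and is never interpreted as part of the SQL text.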

It is easy to think that because a stored procedure is being used, and the stored procedure itself may be safe, we are secure. Unfortunately, simple mistakes like this can lead to a vulnerability. Make sure you are making database calls using parameterized queries. Don't use dynamic SQL, even if it is just to call a stored procedure.

XXE and .Net

May 26, 2016
Filed under: Development, Security 

XXE, or XML External Entity, is an attack against applications that parse XML. It occurs when XML input contains a reference to an external entity that the application wasn't expected to resolve. In this article, I will discuss how .Net handles XML for certain objects and how to properly configure those objects to block XXE attacks. It is important to understand that different versions of the .Net framework handle this differently; I will point out the differences for each object.

I will cover the XmlReader, XmlTextReader, and XmlDocument objects. Here is a quick summary regarding the default settings:

Object          Version            Safe by Default?
XmlReader       Prior to 4.0       Yes
XmlReader       4.0+               Yes
XmlTextReader   Prior to 4.0       No
XmlTextReader   4.0+               No
XmlDocument     4.5 and earlier    No
XmlDocument     4.6                Yes

XmlReader

Prior to 4.0

The ProhibitDtd property is used to determine if a DTD will be parsed.

  • True (default) – Throws an exception if a DTD is identified. (See Figure 1)
  • False – Allows the DTD to be parsed. (Potentially Vulnerable)

Code that throws an exception when a DTD is processed: By default, ProhibitDtd is set to true, and an exception is thrown when an entity is referenced.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlReader myReader = XmlReader.Create(new StringReader(xml));
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Exception when executed:

[Figure 1: XXE 1]

Code that allows a DTD to be processed: Using the XmlReaderSettings object, it is possible to allow the parsing of entities. This could make your application vulnerable to XXE.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlReaderSettings rs = new XmlReaderSettings();

    rs.ProhibitDtd = false;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Output when executed showing injected text:

[Figure 2: XXE 2]

.Net 4.0+
In .Net 4.0, the ProhibitDtd property was replaced by the new DtdProcessing enumeration. There are now three options:

  • Prohibit (default) – Throws an exception if a DTD is identified.
  • Ignore – Ignores any DTD specifications in the document, skipping over them and continuing to process the document.
  • Parse – Will parse any DTD specifications in the document. (Potentially Vulnerable)

Code that throws an exception when a DTD is processed: By default, DtdProcessing is set to Prohibit, blocking any external entities and keeping the code safe.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlReader myReader = XmlReader.Create(new StringReader(xml));
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Exception when executed:

[Figure 3: XXE 3]

Code that ignores DTDs and continues processing: Using the XmlReaderSettings object, setting DtdProcessing to Ignore will skip processing any DTDs. In this case, an exception was still thrown because the document references an entity whose definition was skipped.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlReaderSettings rs = new XmlReaderSettings();
    rs.DtdProcessing = DtdProcessing.Ignore;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();
}

Output when executed ignoring the DTD (exception due to trying to use the unprocessed entity):

[Figure 4: XXE 4]

Code that allows a DTD to be processed: Using the XmlReaderSettings object, setting DtdProcessing to Parse will allow the entities to be processed. This potentially makes your code vulnerable.

static void Reader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";
			
    XmlReaderSettings rs = new XmlReaderSettings();
    rs.DtdProcessing = DtdProcessing.Parse;

    XmlReader myReader = XmlReader.Create(new StringReader(xml),rs);
            
    while (myReader.Read())
    {
        Console.WriteLine(myReader.Value);
    }
    Console.ReadLine();           
}

Output when executed showing injected text:

[Figure 5: XXE 5]


XmlTextReader

The XmlTextReader uses the same properties as the XmlReader object; however, there is one big difference: the XmlTextReader defaults to parsing XML entities, so you need to explicitly tell it not to.

Prior to 4.0

The ProhibitDtd property is used to determine if a DTD will be parsed.

  • True – Throws an exception if a DTD is identified. (See Figure 1)
  • False (default) – Allows the DTD to be parsed. (Potentially Vulnerable)

Code that allows a DTD to be processed (potentially vulnerable): By default, the XmlTextReader sets the ProhibitDtd property to false, allowing entities to be parsed and the code to potentially be vulnerable.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    while (myReader.Read())
    {
         if (myReader.NodeType == XmlNodeType.Element)
         {
             Console.WriteLine(myReader.ReadElementContentAsString());
         }
    }
    Console.ReadLine();
}

Code that blocks the DTD from being parsed and throws an exception: Explicitly setting the ProhibitDtd property to true will block DTDs from being processed, making the code safe from XXE. Notice that the XmlTextReader exposes the ProhibitDtd property directly; it doesn't have to use the XmlReaderSettings object.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    myReader.ProhibitDtd = true;

    while (myReader.Read())
    {
       if (myReader.NodeType == XmlNodeType.Element)
       {
           Console.WriteLine(myReader.ReadElementContentAsString());
       }
    }
    Console.ReadLine();
}

4.0+

In .Net 4.0, the ProhibitDtd property was replaced by the new DtdProcessing enumeration. There are now three options:

  • Prohibit – Throws an exception if a DTD is identified.
  • Ignore – Ignores any DTD specifications in the document, skipping over them and continuing to process the document.
  • Parse (default) – Will parse any DTD specifications in the document. (Potentially Vulnerable)

Code that allows a DTD to be processed (potentially vulnerable): By default, the XmlTextReader sets DtdProcessing to Parse, making the code potentially vulnerable to XXE.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));

    while (myReader.Read())
    {
        if (myReader.NodeType == XmlNodeType.Element)
        {
            Console.WriteLine(myReader.ReadElementContentAsString());
        }
    }
    Console.ReadLine();
}

Code that blocks the DTD from being parsed: To block entities from being parsed, you must explicitly set the DtdProcessing property to Prohibit or Ignore. Note that this is set directly on the XmlTextReader, not through the XmlReaderSettings object.

static void TextReader()
{
    string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
        "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
        "><doc>&win;</doc>";

    XmlTextReader myReader = new XmlTextReader(new StringReader(xml));
			
    myReader.DtdProcessing = DtdProcessing.Prohibit;

    while (myReader.Read())
    {
         if (myReader.NodeType == XmlNodeType.Element)
         {
             Console.WriteLine(myReader.ReadElementContentAsString());
         }
    }
    Console.ReadLine();
}

Output when the DTD is prohibited:

[Figure 6: XXE 6]


XmlDocument

For the XmlDocument, you need to change the default XmlResolver object to prohibit a DTD from being parsed.

.Net 4.5 and Earlier

By default, the XmlDocument uses an XmlUrlResolver, which will resolve the entities included in the XML document. To prohibit this, set XmlResolver = null.

Code that does not set the XmlResolver properly (potentially vulnerable): The default XmlResolver will parse entities, making the following code potentially vulnerable.

static void Load()
{
   string fileName = @"C:\Users\user\Documents\test.xml";

   XmlDocument xmlDoc = new XmlDocument();

   xmlDoc.Load(fileName);

   Console.WriteLine(xmlDoc.InnerText);

   Console.ReadLine();
}

Code that sets the XmlResolver to null, blocking any DTDs from being processed: To block entities from being parsed, you must explicitly set the XmlResolver to null. This example uses LoadXml instead of Load, but they both work the same in this case.

static void LoadXML()
{
   string xml = "<?xml version=\"1.0\" ?><!DOCTYPE doc " +
       "[<!ENTITY win SYSTEM \"file:///C:/Users/user/Documents/testdata2.txt\">]" +
       "><doc>&win;</doc>";

   XmlDocument xmlDoc = new XmlDocument();

   xmlDoc.XmlResolver = null;

   xmlDoc.LoadXml(xml);

   Console.WriteLine(xmlDoc.InnerText);

   Console.ReadLine();
}

.Net 4.6

It appears that in .Net 4.6, the XmlResolver defaults to null, making the XmlDocument safe by default. However, you can still set the XmlResolver explicitly in the same way as prior to 4.6 (see the previous code snippet).
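If you have several XML parsing paths in an application, it can help to centralize the safe configuration in one place. Below is a minimal sketch, assuming .Net 4.0 or later (the CreateSafeReader helper is my own, not a framework API):

    static XmlReader CreateSafeReader(TextReader input)
    {
        XmlReaderSettings rs = new XmlReaderSettings();
        rs.DtdProcessing = DtdProcessing.Prohibit; // throw if any DTD is present
        rs.XmlResolver = null;                     // never resolve external resources
        return XmlReader.Create(input, rs);
    }

An XmlDocument can then be loaded through the safe reader, for example xmlDoc.Load(CreateSafeReader(new StringReader(xml))).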

.Net EnableHeaderChecking

November 9, 2015
Filed under: Security 

How often do you take untrusted input and insert it into response headers? This could be in a custom header or in the value of a cookie. Untrusted user data is always a concern when it comes to the security side of application development, and response headers are no exception. This type of attack is referred to as Response Splitting or HTTP Header Injection.

Like Cross Site Scripting (XSS), HTTP Header Injection is an attack that results from untrusted data being used in a response. In this case it is in a response header, which means the context is what we need to focus on. Remember that in XSS, context is very important, as it defines which characters are potentially dangerous. In the HTML context, characters like < and > are very dangerous. In Header Injection, the greatest concern is the carriage return (%0D or \r) and new line (%0A or \n) characters, together known as CRLF. Response headers are separated by a CRLF, meaning that if you can inject a CRLF you can start creating your own headers or even page content.

Manipulating the headers may allow you to redirect the user to a different page, perform cross-site scripting attacks, or even rewrite the page. While commonly overlooked, this is a very dangerous flaw.

ASP.Net has a built-in defense mechanism, enabled by default, called EnableHeaderChecking. When EnableHeaderChecking is enabled, CRLF characters are converted to %0D%0A and the browser does not recognize them as a new line. Instead, they are just displayed as characters, not actually creating a line break. The following code snippet shows how the response headers look when adding CRLF into a header.

    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.AppendHeader("test", "tes%0D%0At\r\ntest2");
            Response.Cookies.Add(new HttpCookie("Test-Cookie", "Another%0D%0ATest\r\nCookie"));
        }
    }

When the application runs, it will properly encode the CRLF as shown in the image below.

[Figure: EnableHeader1]

My next step was to disable EnableHeaderChecking in the web.config file.

	 <httpRuntime targetFramework="4.6" enableHeaderChecking="false"/>

My expectation was that I would get a response that allowed the CRLF and would show me a line break in the response headers. To my surprise, I got the error message below:

[Figure: EnableHeader2]

So why the error? After doing a little Googling I found an article about ASP.Net 2.0 Breaking Changes on IIS 7.0. The item of interest is “13. IIS always rejects new lines in response headers (even if ASP.NET enableHeaderChecking is set to false)”

I didn't realize this change had been implemented, but apparently, if you are using IIS 7.0 or above, ASP.Net won't allow newline characters in a response header even when EnableHeaderChecking is disabled. This is actually good news, as there is very little reason to allow new lines within a response header value; if another line is needed, just create a new header. This is a great mitigation, and the default configuration helps protect ASP.Net applications from Response Splitting and HTTP Header Injection attacks.
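If you ever do write untrusted data into a header value, it also doesn't hurt to sanitize it yourself as defense in depth. A minimal sketch (the StripCrlf helper name is hypothetical, not part of the framework):

    static string StripCrlf(string value)
    {
        // Remove CR and LF so untrusted input cannot start a new header line.
        if (string.IsNullOrEmpty(value))
        {
            return value;
        }
        return value.Replace("\r", string.Empty).Replace("\n", string.Empty);
    }

Usage would look something like Response.AppendHeader("X-Custom", StripCrlf(userInput)).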

Understanding the framework you use and the server it runs on is critical to fully understanding your security risks. These built-in features can really help protect an application from security risks. Happy and secure coding!

I did a podcast on this topic which you can find on the DevelopSec podcast.

Potentially Dangerous Request.Path Value was Detected…

November 4, 2015
Filed under: Development, Security 

I have discussed request validation many times, usually when the potentially dangerous input error message appears while viewing a web page. Another interesting protection in ASP.Net is the built-in, on-by-default Request.Path validation. Have you ever seen the error below when using or testing your application?

[Figure: Pdc1]

The screen above occurred because I placed the asterisk (*) character in the URL. ASP.Net defines a default set of illegal characters for the URL path. This list is controlled by the requestPathInvalidCharacters setting and can be configured in the web.config file. By default, the following characters are blocked:

  • <
  • >
  • *
  • %
  • &
  • :
  • \\

It is important to note that these characters are blocked from the URL path only; this does not include the protocol specification or the query string. That should be obvious, since the query string uses the & character to separate parameters.

There are not many cases where your URL needs to use any of these default characters, but if there is a need to allow a specific character, you can override the default list. The override is done in the web.config file. The snippet below sets a new list that removes the < and > characters:

	<httpRuntime requestPathInvalidCharacters="*,%,&amp;,:,\\"/>

Be aware that because the web.config file is an XML file, you need to escape the < and > characters, setting them as &lt; and &gt; respectively. The ampersand in the list must likewise be written as &amp;.
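For example, a sketch of the full default list from above written in its escaped form:

	<httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,:,\\"/>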

Remember that modifying the default security settings can expose your application to a greater security risk. Make sure you understand the risk of making these modifications before you perform them. It should be a rare occurrence to require a change to this default list. Understanding the platform is critical to understanding what is and is not being protected by default.

Static Analysis: Analyzing the Options

April 5, 2015
Filed under: Development, Security, Testing 

When it comes to automated testing for applications there are two main types: Dynamic and Static.

  • Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
  • Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn't see the code as it is running, it has the ability to trace how data flows through the application, down to the function level.

Dynamic scanning, an important component of any secure development workflow, analyzes a system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.

If you are thinking about adding static analysis to your process, there are a few things to think about. Keep in mind there is not just one factor that should be the decision maker. Budget, in-house experience, application type, and other factors will combine to point to the right decision.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options available for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which option may be right.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support many of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP's Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is cost; most of them are very expensive. Here are a few in-house benefits:

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results

Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren't available, or budget is tight, using a third party may be the right solution. There is always the question of whether you are ok with sending your code to a third party, but many companies accept this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time-consuming pieces of a static analysis tool, right up there with getting it set up and configured properly. Some scans may return tens of thousands of results, and weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.

Conclusion

Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.

A Pen Test is Coming!!

October 18, 2014
Filed under: Development, Security, Testing 

You have been working hard to create the greatest app in the world. Ok, so maybe it is just a simple business application, but it is still important to you. You have put countless hours of hard work into creating this masterpiece. It looks awesome, and it does everything the business has asked for. Then you get the email from security: your application will undergo a penetration test in two weeks. Your heart skips a beat and sinks a little as you recall everything you have heard about this experience. Most likely, your immediate reaction is to go on the defensive. Why would your application need a penetration test? Of course it is secure, we use HTTPS. No one would attack us, we are small. Take a breath.. it is going to be alright.

All too often, when I go into a penetration test, the developers start on the defensive.  They don’t really understand why these “other” people have to come in and test their application.  I understand the concerns.   History has shown that many of these engagements are truly considered adversarial.  The testers jump for joy when they find a security flaw.  They tell you how bad the application is and how simple the fix is, leading to you feeling about the size of an ant.  This is often due to a lack of good communication skills. 

Penetration testing is adversarial. It is an offensive assessment to find security weaknesses in your systems, an attempt to simulate an attacker against your system. Of course there are many differences, such as scope, timing, and rules, but the goal is the same: let's see what we can do to your system. Unfortunately, I find that many testers don't have the communication skills to relay the information back to the business and developers in a positive way. I can't tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck, and then leave. If that is your penetration tester, find a new one. First, that attitude breaks down communication with the client and doesn't help promote a secure atmosphere. We don't get anywhere by belittling the teams that have worked hard to create their application. Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified. Just listing a bunch of flaws is fairly useless to a company.

These engagements should be worth everyone’s time.  There should be positive communication between the developers and the testing team.  Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get.  The engagement should be helpful.  With the right company, you will get a solid assessment and recommendations that you can do something with.  If you don’t get that, time to start looking at another company for testing.  Make sure you are ready for the test.   If the engagement requires an environment to test in, have it all set up.  That includes test data (if needed).   The testers want to hit the ground running.  If credentials are needed, make sure those are available too.  The more help you can be, the more you will benefit from the experience.   

As much as you don't want to hear it, there is a very high chance the test will find vulnerabilities. While it would be great if applications didn't have vulnerabilities, it is fairly rare to find an application without them. Use this experience to learn and train on security issues. Take the feedback as constructive criticism, not a personal attack. Trust me, you want the pen testers to find these flaws before a real attacker does.

Remember that this is for your benefit. We as developers also need to stay positive. The last thing you want to do is challenge the pen testers, claiming your app is not vulnerable. The teams that usually do that are the most vulnerable ones. Stay positive and it will be a great learning experience.

ViewStateUserKey: ViewStateMac Relationship

November 26, 2013
Filed under: Development, Security, Testing 

I apologize for the delay; I recently spoke about this at the SANS Pen Test Summit in Washington D.C. but hadn't had a chance to put it into a blog post. While doing research for my presentation on hacking ASP.Net applications, I came across something very interesting that sort of blew my mind. One of my topics was ViewStateUserKey, a feature of .Net that helps protect forms from Cross-Site Request Forgery (CSRF). I had always assumed that setting this value (it is off by default) put a unique key into the ViewState for the specific user. ViewState is a client-side storage mechanism that the form uses to help maintain state.

I have a previous post about ViewStateUserKey and how to set it here: http://www.jardinesoftware.net/2013/01/07/asp-net-and-csrf/

While I was doing some testing, I found that my ViewState wasn't different between users even though I had set the ViewStateUserKey value. Of course it was late at night.. well ok, early morning, so I thought maybe I wasn't setting it right. But I triple checked and it was right. Upon closer inspection, my ViewState was identical between my two users. I was really confused because, as I mentioned, I thought it put a unique value into the ViewState to make it unique per user.

My problem… ViewStateMac was disabled. But wait.. what does ViewStateMac have to do with ViewStateUserKey? That is what I said. So I started digging in with Reflector to see what was going on. What did I find? The ViewStateUserKey is actually used to modify the ViewState MAC. It doesn't store a special value in the ViewState.. rather, it modifies how the MAC is generated, protecting the ViewState from parameter tampering.

So this does work. If the MAC is different between users, then the ViewState is ultimately different, and the attacker's value is different from the victim's. When the ViewState is submitted, the MACs won't match, which is what we want.

Unfortunately, this means we are relying again on ViewStateMac being enabled. Don't get me wrong, I think it should be enabled, and this is yet another reason why. Without it, it doesn't appear that the ViewStateUserKey does anything. We have been saying for the longest time that to protect against CSRF, set the ViewStateUserKey. No one has said it relies on ViewStateMac, though.
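For reference, a minimal sketch of setting the key, in the same spirit as the linked post, assuming session state is available:

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Tie the ViewState MAC to this user's session; with ViewStateMac
        // enabled, another user's ViewState will fail MAC validation.
        ViewStateUserKey = Session.SessionID;
    }

Keep in mind, per the discussion above, this only helps when ViewStateMac is enabled.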

To Recap.. Things that rely on ViewStateMac:

  • ViewState
  • Event Validation
  • ViewStateUserKey

It is important that we understand the framework features, as disabling one item could cause a domino effect on other items. Be secure.

Bounties For Fixes

October 11, 2013
Filed under: Security 

It was recently announced that Google will pay for open-source code security fixes (http://www.computerworld.com/s/article/9243110/Google_to_pay_for_open_source_code_security_fixes). Paying for things to happen is nothing new; we have seen bug bounty programs popping up at a lot of companies. The idea behind a bug bounty is that people can submit bugs they have found and then possibly get paid for them. This has been very successful for some large companies and for some bug finders out there.

The difference in this new announcement is that Google is paying people to apply fixes to some widely used open source tools. I personally think this is a good thing because it will encourage people to actually start fixing some of the issues that exist. Security effort is usually bent on finding vulnerabilities, which by itself doesn't fix anything; it still requires the software developers to implement some sort of change to get the security hole plugged. Here, the push to fix the problem is being rewarded. This is especially important in open-source projects, as many of the people that work on these projects do so voluntarily.

Is there any concern, though, that this process could be abused? The first thought that comes to mind is people working together, where one person plants the bug and the other one fixes it. Not sure how realistic that is, but I am sure there are people thinking about it. What could be even more challenging is verifying the fixes. What happens if someone patches something but does it incorrectly? Who is testing the fix? How do they verify that it is really fixed properly? If they find later that the fix wasn't complete, does the fixer have to return the payment? There are always questions to be answered when we look at a new program like this. I am sure that Google has thought about this before rolling it out, and I really hope the program works out well. It is a great idea, and we need to get more people involved in helping fix some of these issues.

Your Passwords Were Stolen: What’s Your Plan?

May 29, 2013
Filed under: Development, Security 

If you have been glancing at news stories this year, you have certainly seen the large number of data breaches that have occurred. Even just today, we are seeing reports that Drupal.org suffered a breach (https://drupal.org/news/130529SecurityUpdate) that resulted in unauthorized access to hashed passwords, usernames, and email addresses. Note that this is not a vulnerability in the CMS product, but in the actual website for Drupal.org. Unfortunately, Drupal is just the latest to report this issue.

In addition to Drupal, LivingSocial also suffered a huge breach involving passwords. LinkedIn, Evernote, Yahoo, and Name.com have also joined this elite club. Across these cases, we have seen many different formats for storing the passwords. Some sites use plain text (ouch); others are actually doing what has been recommended and using a salted hash. Even with a salted hash there are still issues: hashes are fast, and some hash algorithms are weaker than others. Bad choices can lead to an immediate failure in implementation and protection.

Which format you should store your passwords in will be saved for another post and has been discussed heavily on the internet. It is really outside the scope of this post because, in this discussion, it is already too late for that. Here, I want to ask the simple question: "You have been breached. What do you do?"

Ok, maybe it is not a simple question, or maybe it is. Most of the sites that have seen these breaches are fairly quick to force password resets for all of their users. The idea behind this is that the credentials were stolen, but only the actual user should be able to perform a password reset. The user performs the reset, they have new credentials, and the information the bad guy got (along with everyone else that downloads the stolen credentials) is no good. Or maybe not?? Wait.. you re-use passwords across multiple sites? Well, that makes it more interesting. I guess you now need to reset a bunch of passwords.

Resetting passwords appears to be the standard. I haven't seen anyone attempt anything else; if you have, please share. But what else would work? You can't just change the algorithm to make it stronger.. the bad guy has the password. Changing the algorithm doesn't change that fact; they would just log in through the stronger algorithm. I guess that won't work. It might be nice to have a mechanism to upgrade everyone to a stronger algorithm as time goes on, though.

So if resetting passwords en masse appears to work and is the standard, do you have a way to do it? If you got breached today, what would you need to do to reset everyone's password, or at least force a password reset for all users? There are a few options, and of course it depends on how you actually manage user passwords.

If you have a password expiration field in the database, you could just set all passwords to have expired yesterday. Now everyone will be presented with an expired-password prompt. The problem with this solution is if the expired-password flow only requires the old password to set the new one; it is possible the bad guy does this before the actual user. Oops.

You could just null out the password fields, or put a 0 or some other false value into them. This only works for encrypted or hashed passwords, not clear text. It could be done with a simple SQL UPDATE statement; just drop that needless WHERE clause ;). When a user goes to log in, they will be unsuccessful, because when the password they submit is encrypted or hashed it will never match the value you updated the field to. This forces them to use the forgot-password feature.
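A minimal sketch of that idea (tblUsers is borrowed from earlier in this blog; the PasswordHash column name is hypothetical):

    static void InvalidateAllPasswords(string connectionString)
    {
        // Overwrite every stored hash with a value no real password can
        // hash to, forcing every user through the forgot-password flow.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE tblUsers SET PasswordHash = '0'", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }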

You could run a separate application that resets every user's password like the previous method; it just doesn't run a database update directly on the server. Maybe you are a control freak about what gets run against the servers and want to control that access.

As you can see, there are many ways to do this, but have you really given it any thought? Have you written out a plan so that, in the event your site gets breached like these other sites, you will be able to react intelligently and swiftly? Often when we think of incident response, we think of stopping the attack, but we also have to think about how we would quickly protect our users from future attacks.

These ideas are just a few examples of how you can go about this, meant to provoke you and your team into thinking about how you would do it in your situation. Every application is different, and this scenario should be in your IR plan. If you have other ways to handle this, please share with everyone.

Hidden Treasures: Not So Hidden

April 5, 2013
Filed under: Development, Security, Testing 

For years now, I have run into developers who believe that just because a request can't be seen, it is not vulnerable to flaws. Wait, what are we talking about here? What do you mean by a request that can't be seen? There are a few ways a request can be hidden from the user. For example, Ajax requests are invisible to users. What about framed pages? Except for the main parent, all of the other pages are hidden from the user's view. These are what I would call hidden requests.

Let's use a framed site as an example. Say a page has multiple frames embedded in it, each leading to a different page of content. From a user's perspective, I would normally only see, in the address bar, the URL of the parent page that contains the other frames. Although each frame has its own URL, I don't see it natively. This out-of-sight, out-of-mind thought process often leads us to overlook the security of these pages.

There are all sorts of vulnerabilities that can be present in the scenario above.  I have seen SQL injection, Direct Object Reference and Authorization issues.  BUT THEY ARE HIDDEN??

To the naked eye, yes, they are hidden. Unfortunately, there are numerous tools available to see these requests. Web proxies are the most popular, although some browser add-ins may also do the trick. The two proxies I use the most are Burp and Fiddler. Both of these tools intercept and record all of the HTTP/HTTPS traffic between your browser and the server. This means that even though I might not see the URL or the request in the browser, I will see it in the proxy.

Many years ago, I came across this exact scenario. We had a website that used frames, and one of the framed pages took parameters on the query string. One of those parameters just happened to be vulnerable to SQL Injection. The developer didn't think anything of it because you never saw the parameters. All it took was breaking the page out of the frame and directly modifying the value in the address bar, or using a proxy to capture and manipulate the request, to successfully exploit the flaw.

It was easy to fix the problem, as the developer updated his SQL to use parameterized queries instead of dynamic SQL, but this just shows how easy it is to overlook issues that can’t be seen.

In another instance, I saw a page that allowed downloading files based on a query string value. During normal use of the application you might never see that request, just the file downloading. Using a proxy, it was easy not only to identify the request, but also to identify it as a great candidate for a security flaw. When we reported the flaw, I was asked multiple times how I found it. It was another case where the developer thought that because the request wasn't displayed anywhere, no one would find it.

There are many requests that go unseen in the address bar of the web browser. We must remember that, as developers, it is our responsibility to properly secure all of our code, whether or not it appears in the address bar. I encourage all developers to use web proxies while testing and developing their applications so they can be aware of all the transactions that are actually made.
