F5 BigIP Decode with Fiddler

September 18, 2015
Filed under: Development, Testing 

There are many tools out there that allow you to decode the F5 BigIP cookie used on some sites. I haven’t seen anything that just plugs into Fiddler, if that is what you use for debugging purposes. One of the reasons you may want to decode the F5 cookie is just that: debugging. If you need to know which server behind the load balancer your request is going to in order to troubleshoot a bug, this is the cookie you need. I won’t go into a long discussion of the F5 cookie here, but you can read more about it in the description I have posted previously.
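For reference, the unencrypted IPv4 form of the cookie value is the back-end server’s IP address stored as a little-endian decimal integer, followed by the port with its two bytes swapped, followed by 0000. A minimal Python sketch of the conversion (using a commonly cited sample value, not one taken from a real site):

    # Decode an unencrypted IPv4 BigIP persistence cookie value ("<ip>.<port>.0000").
    def decode_bigip_cookie(value):
        ip_part, port_part, _ = value.split(".")

        # The IP address is stored as a little-endian 32-bit integer.
        ip_int = int(ip_part)
        ip = ".".join(str((ip_int >> shift) & 0xFF) for shift in (0, 8, 16, 24))

        # The port is stored with its two bytes swapped.
        port_int = int(port_part)
        port = ((port_int & 0xFF) << 8) | (port_int >> 8)

        return ip, port

    print(decode_bigip_cookie("1677787402.36895.0000"))  # ('10.1.1.100', 8080)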

Most of the examples I have seen use Python to do the conversion. I looked for a JavaScript example, as that is what Fiddler supports in its Custom Rules, but couldn’t really find anything. I got to messing around with it and put together a very rough set of functions to decode the cookie value back to its IP address and port. It sticks the result into a custom column in the Fiddler interface (scroll all the way to the right, the last column). If it identifies the cookie, it will attempt to decode it and populate the column. This can be done for both response and request cookies.

To decode response cookies, you need to update the custom Fiddler rules by adding code to the static function OnBeforeResponse(oSession: Session). The following code can be inserted at the end of that function:

        // Simple regex to identify the BigIP cookie pattern: "<ip>.<port>.0000".
        var re = /\d+\.\d+\.0{4}/;

        if (oSession.oResponse.headers)
        {
            for (var x:int = 0; x < oSession.oResponse.headers.Count(); x++)
            {
                if (oSession.oResponse.headers[x].Name.Contains("Set-Cookie"))
                {
                    var cookie : Fiddler.HTTPHeaderItem = oSession.oResponse.headers[x];
                    var myArray = re.exec(cookie.Value);
                    if (myArray != null && myArray.length > 0)
                    {
                        for (var i = 0; i < myArray.length; i++)
                        {
                            // First segment: the back-end IP as a little-endian decimal integer.
                            var index = myArray[i].indexOf(".");
                            var val = myArray[i].substring(0, index);

                            // Convert to hex and left-pad to 8 characters (4 bytes).
                            var hIP = parseInt(val).toString(16);
                            while (hIP.length < 8)
                            {
                                hIP = "0" + hIP;
                            }
                            // Read the four bytes in reverse order to get the octets.
                            var hIP1 = parseInt(hIP.substring(6,8),16);
                            var hIP2 = parseInt(hIP.substring(4,6),16);
                            var hIP3 = parseInt(hIP.substring(2,4),16);
                            var hIP4 = parseInt(hIP.substring(0,2),16);

                            // Second segment: the port with its two bytes swapped.
                            var val2 = myArray[i].substring(index + 1);
                            var index2 = val2.indexOf(".");
                            val2 = val2.substring(0, index2);

                            // Convert to hex, left-pad to 4 characters, then swap the bytes back.
                            var hPort = parseInt(val2).toString(16);
                            while (hPort.length < 4)
                            {
                                hPort = "0" + hPort;
                            }
                            var hPortS = hPort.substring(2,4) + hPort.substring(0,2);
                            var hPort1 = parseInt(hPortS,16);

                            oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + "  ";
                        }
                    }
                }
            }
        }

In order to decode the cookie from a request, you need to add the following code to the static function OnBeforeRequest(oSession: Session):

        // Simple regex to identify the BigIP cookie pattern: "<ip>.<port>.0000".
        var re = /\d+\.\d+\.0{4}/;

        // Reset the custom column for this session.
        oSession["ui-customcolumn"] = "";

        if (oSession.oRequest.headers.Exists("Cookie"))
        {
            var cookie = oSession.oRequest["Cookie"];
            var myArray = re.exec(cookie);
            if (myArray != null && myArray.length > 0)
            {
                for (var i = 0; i < myArray.length; i++)
                {
                    // First segment: the back-end IP as a little-endian decimal integer.
                    var index = myArray[i].indexOf(".");
                    var val = myArray[i].substring(0, index);

                    // Convert to hex and left-pad to 8 characters (4 bytes).
                    var hIP = parseInt(val).toString(16);
                    while (hIP.length < 8)
                    {
                        hIP = "0" + hIP;
                    }
                    // Read the four bytes in reverse order to get the octets.
                    var hIP1 = parseInt(hIP.substring(6,8),16);
                    var hIP2 = parseInt(hIP.substring(4,6),16);
                    var hIP3 = parseInt(hIP.substring(2,4),16);
                    var hIP4 = parseInt(hIP.substring(0,2),16);

                    // Second segment: the port with its two bytes swapped.
                    var val2 = myArray[i].substring(index + 1);
                    var index2 = val2.indexOf(".");
                    val2 = val2.substring(0, index2);

                    // Convert to hex, left-pad to 4 characters, then swap the bytes back.
                    var hPort = parseInt(val2).toString(16);
                    while (hPort.length < 4)
                    {
                        hPort = "0" + hPort;
                    }
                    var hPortS = hPort.substring(2,4) + hPort.substring(0,2);
                    var hPort1 = parseInt(hPortS,16);

                    oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + "  ";
                }
            }
        }

Again, this is a rough compilation of code to perform the tasks. I am well aware there are other ways to do this, but this did seem to work. USE AT YOUR OWN RISK. It is your responsibility to make sure any code you add or use is suitable for your needs. I am not liable for any issues from this code. From my testing, this worked to decode the cookie and didn't present any issues. This is not production code, but an example of how this task could be done.

Just add the code to the custom rules file and visit a site that sets an F5 cookie, and the decoded value should show up in the custom column.

Static Analysis: Analyzing the Options

April 5, 2015
Filed under: Development, Security, Testing 

When it comes to automated testing for applications, there are two main types: dynamic and static.

  • Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
  • Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn’t see the code as it is running, it has the ability to trace how data flows through the application down to the function level.

Dynamic scanning, an important component of any secure development workflow, analyzes a system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.

If you are thinking about adding static analysis to your process, there are a few things to think about. Keep in mind there is not just one factor that should be the decision maker. Budget, in-house experience, application type and other factors will combine to drive the right decision.

Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.

Budget

I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which option may be right.

Free Tools

There are a few free tools out there that may work for your situation. Most of these tools are tied to a specific programming language, unlike many of the commercial tools that support most of the common languages. For .Net developers, CAT.Net is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.

In the Ruby world, I have used Brakeman, which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.

Managed Services or In-House

Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?

This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP’s Fortify SCA, may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is cost: most of them are very expensive. Here are a few in-house benefits:

  • Ability to integrate directly into your Continuous Integration (CI) operations
  • Ability to customize the technology for your environment/workflow
  • Ability to create extensions to tune the results

Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren’t available, or the budget is tight, using a third party may be the right solution. There is always the question of whether you are ok with sending your code to a third party, but many are ok with this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. This can be one of the most time-consuming pieces of a static analysis tool, right there with getting it set up and configured properly. Some scans may return upwards of tens of thousands of results. Weeding through all of those can be very time consuming and have a negative effect on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.

Conclusion

Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.

A Pen Test is Coming!!

October 18, 2014
Filed under: Development, Security, Testing 

You have been working hard to create the greatest app in the world.  Ok, so maybe it is just a simple business application, but it is still important to you.  You have put countless hours of hard work into creating this masterpiece.  It looks awesome, and does everything that the business has asked for.  Then you get the email from security: Your application will undergo a penetration test in two weeks.  Your heart skips a beat and sinks a little as you recall everything you have heard about this experience.  Most likely, your immediate action is to go on the defensive.  Why would your application need a penetration test?  Of course it is secure, we do use HTTPS.  No one would attack us, we are small.  Take a breath..  it is going to be alright.

All too often, when I go into a penetration test, the developers start on the defensive.  They don’t really understand why these ‘other’ people have to come in and test their application.  I understand the concerns.   History has shown that many of these engagements are truly considered adversarial.  The testers jump for joy when they find a security flaw.  They tell you how bad the application is and how simple the fix is, leading to you feeling about the size of an ant.  This is often due to a lack of good communication skills.

Penetration testing is adversarial.  It is an offensive assessment to find security weaknesses in your systems.  This is an attempt to simulate an attacker against your system.  Of course there are many differences, such as scope, timing and rules, but the goal is the same.  Let’s see what we can do on your system.  Unfortunately, I find that many testers don’t have the communication skills to relay the information back to the business and developers in a way that is positive.  I can’t tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck and then leave.  If that is your penetration tester, find a new one.  First, that attitude breaks down the communication with the client and doesn’t help promote a secure atmosphere.  We don’t get anywhere by belittling the teams that have worked hard to create their application.  Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified.  Just listing a bunch of flaws is fairly useless to a company.

These engagements should be worth everyone’s time.  There should be positive communication between the developers and the testing team.  Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get.  The engagement should be helpful.  With the right company, you will get a solid assessment and recommendations that you can do something with.  If you don’t get that, time to start looking at another company for testing.  Make sure you are ready for the test.   If the engagement requires an environment to test in, have it all set up.  That includes test data (if needed).   The testers want to hit the ground running.  If credentials are needed, make sure those are available too.  The more help you can be, the more you will benefit from the experience.

As much as you don’t want to hear it, there is a very high chance the test will find vulnerabilities.  While it would be great if applications didn’t have vulnerabilities, it is fairly rare to find one that doesn’t.  Use this experience to learn and train on security issues. Take the feedback as constructive criticism, not someone attacking you.   Trust me, you want the pen testers to find these flaws before a real attacker does.

Remember that this is for your benefit.  We as developers also need to stay positive.  The last thing you want to do is challenge the pen testers saying your app is not vulnerable.  The teams that usually do that are the most vulnerable. Stay positive and it will be a great learning experience.

Are Application Security Certifications Worth It?

August 9, 2014
Filed under: Security 

In the IT industry there has always been a debate for and against certifications. This is no different than the age-old battle of whether or not a bachelor’s degree is needed to be good in IT. There are large entities that have made a really good profit off the certification tracks. Not only do you have the people that create the tests, but also all of the testing centers. It is a pretty lucrative business if your cert is popular.

I remember when I first started developing applications there were certifications like the Microsoft Certified series or Sun certifications. Anyone remember doing the BrainBench tests online? The goal was to indicate that you had some base level of knowledge about that technology. This seemed to work for a technology, but so far it doesn’t seem to be catching on in the development world for secure development certifications.

You haven’t heard? There are actually certifications that try to show some expertise in application security. GIAC has a secure coding program for both Java and .Net, both leading to the GSSP certification. ISC2 has the CSSLP certification aimed at those that work with developing applications. They don’t feel that widespread though. Let’s look at these two examples.

The GIAC certification focuses mostly on the developer and writing secure code. This is tough because it is a certification for only a portion of your job as a developer. Your main goal is writing code, so taking the effort to go out and get a certification that is so narrowly focused can be a deterrent, never mind the cost of these certs these days. The other issue is that we are not seeing wide acceptance in the industry for these certifications. I have not seen many job postings for developers that look for the GSSP or CSSLP certification, or any other secure coding cert. You might see MCP or MCSD, but not security certs. Until we start looking for these in our candidates, there is no reason for developers to take the time to get them.

The ISC2 CSSLP certification is geared less at secure coding, and focused more toward the entire SDLC. This alone may make it even less interesting to a developer to attain because it is not directly related to coding. Sure we are involved in the SDLC, but do we really want some cert that says we are security conscious? I am not saying that certifications are a bad thing. I think they can help show some competence, but there seem to be a lot of barriers to adoption within the developer community with security certifications.

When you look at other security certifications, they are more directly tied to a job role, or more encompassing. For example, the Web Application Penetration Tester certifications that are available encompass a role: web penetration tester. In our examples above, there is no GSSP role for a developer.

How do we go about solving the problem? Is there a certification that could actually be broadly adopted in the developer world? Rather than have a separate security certification, should we expect that the other developer certifications would incorporate security? Just because I have the GSSP doesn’t mean I can actually write good programs with no flaws. Would I be more marketable if I had the MCSD and everyone knew that that required secure coding expertise?

Push the major developer certification creators to start requiring more secure coding coverage. We shouldn’t need an extra certification for application security; it should just be a part of what we do every day.

Application Logging: The Next Great Wonder

August 2, 2014
Filed under: Security 

What type of logging do you perform in your applications? Do you just log exceptions? Many places I have worked and many developers I have talked to over the years mostly focus on logging troubleshooting artifacts. Where did the application break, and what may have caused it. We do this because we want to be able to fix the bugs that may crop up that cause our users difficulty in using the application.

While this makes sense to many developers, because it is directly related to the pain they face in troubleshooting, it leaves a lot to be desired. When we think about it from a security perspective, there is much more that should be considered. The simplest events are successful and unsuccessful authentication attempts. Many developers will say they log the first, but the latter is usually overlooked. In reality, the failed attempts are most likely logged to help with account lockout and don’t serve much other purpose. But they do. Those logs can be used to identify brute force attacks against a user’s account.

Other events that are critical include logoff events, password change events and even the access of sensitive data. Not many days go by that we don’t see word of a breach of data. If your application accesses sensitive data, how do you know who has looked at it? If records are meant to be viewed one at a time, but someone starts pulling hundreds at a time, would you notice? If a breach occurs, are you able to go back into the logs and show what data has been viewed and by whom?
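As a rough illustration of what that could look like (the event names and fields below are placeholders I made up, not a prescribed format), these events can flow through whatever logging framework you already use, as long as each entry carries enough context to answer questions later. A minimal Python sketch:

    # Minimal sketch of security-event logging (illustrative only; the event
    # names and fields are made-up placeholders, not a standard).
    import logging

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(name)s %(message)s")
    security_log = logging.getLogger("security")

    def log_event(event, username, source_ip, **details):
        # One line per event, with enough context to reconstruct what happened.
        extra = " ".join("%s=%s" % (k, v) for k, v in details.items())
        security_log.info("event=%s user=%s ip=%s %s", event, username, source_ip, extra)

    # The kinds of events discussed above.
    log_event("login_failure", "jsmith", "203.0.113.10", reason="bad_password")
    log_event("login_success", "jsmith", "203.0.113.10")
    log_event("password_change", "jsmith", "203.0.113.10")
    log_event("record_view", "jsmith", "203.0.113.10", record_id=4452)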

Logging and auditing play a critical role in an application, and finding the right balance of data stored is somewhat of an art. Some people may say that you need to just grab everything. That doesn’t always work. Performance seems to be the first concern that comes to mind. I didn’t say it would be easy to throw a logging plan together.

You have to understand your application and the business that it supports. Information and events that are important to one business may not be as important in another business. That is ok. This isn’t a one-size-fits-all solution. Take the time to analyze your situation and log what feels right. But put more thought into it than just troubleshooting. Think about how you will use that stored data if a breach occurs.

In addition to logging the data, there needs to be a plan in place to look at that data. Whether it is an automated tool or a manual review (hopefully a mix of the two), you can’t identify something if you don’t look. All too often we see breaches occur and not be noticed for months or even years afterward. In many of these cases, if someone had just been looking at the logs, it would have been identified immediately and the risk of the breach could have been minimized.
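To make that concrete, here is a trivial sketch of what “looking at the data” could mean, assuming the hypothetical one-line event format from the sketch above has been written to a file. Even something this simple can surface a brute force pattern that would otherwise sit unnoticed:

    # Minimal sketch of reviewing logs for brute force patterns (illustrative
    # only; assumes the made-up "event=... user=..." format shown earlier).
    import re
    from collections import Counter

    failures = Counter()
    pattern = re.compile(r"event=login_failure user=(\S+)")

    with open("security.log") as log_file:
        for line in log_file:
            match = pattern.search(line)
            if match:
                failures[match.group(1)] += 1

    # Flag accounts with an unusually high number of failed logins.
    for user, count in failures.most_common():
        if count >= 10:
            print("Possible brute force against %s: %d failed logins" % (user, count))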

There are tools out there to help with logging in your application, no matter what your platform of choice is. Logging is not usually a bolt-on solution; you have to be thinking about it before you build your application. Take the time up front to do this so when something happens, you have all the data you need to protect yourself and your customers.

Future of ViewStateMac: What We Know

December 12, 2013
Filed under: Development, Security, Testing 

The .Net Web Development and Tools Blog just recently posted some extra information about ASP.Net December 2013 Security Updates (http://blogs.msdn.com/b/webdev/archive/2013/12/10/asp-net-december-2013-security-updates.aspx).

The most interesting thing to me was a note near the bottom of the page that states that the next version of ASP.Net will FORBID setting ViewStateMac=false. That is right.. They will not allow it in the next version. So in short, if you have set it to false, start working out how to set it back to true before you update.

So why forbid it? Apparently, there was a Remote Code Execution flaw identified that can be exploited when ViewStateMac is disabled. They don’t include a lot of details as to how to perform said exploit, but that is neither here nor there. It is about time that something was critical enough that they have decided to take this property out of the developer’s hands.

Over the years I have written many posts discussing attacking ASP.Net sites, many of which rely on ViewStateMac being disabled. I have written to Microsoft regarding how EventValidation can be manipulated if ViewStateMac is disabled. The response was that they think developers should be using the secure settings. I guess that is different now that there is remote code execution. We have reached a new level.

So what does ViewStateMac protect? There are three things that I am aware of that it protects (search this site for any of these and you will find articles with much more detail):

  • ViewState – protects this from parameter tampering
  • EventValidation – protects this from parameter tampering
  • ViewStateUserKey – Used to protect requests from CSRF

So why do developers disable ViewStateMac? Great question. I believe that in most cases, it is disabled because the application is deployed in a web farm and, when the web.config is not configured properly, an error is thrown. When some developers search for the error, many forums recommend disabling ViewStateMac to fix the problem. Unfortunately, that is WRONG!! Here is a Microsoft KB article that explains in detail how to properly configure a system to allow ViewStateMac to be enabled (http://support.microsoft.com/kb/2915218).

Is this a good thing? For developers, yes! This will definitely help increase the protection for ViewState, EventValidation and CSRF if ViewStateUserKey is set. For penetration testers, yes and no. Yes, because we get to say you are doing a good job in this category. No, because some easy pickings are going to be wiped off the plate.

I think this is a pretty bold move by Microsoft to remove control over this, but I do think it is a good thing. This is an important control in the WebForms ecosystem and all too often misunderstood by developers. This should bring many sites one step closer to being a little more secure when this change rolls out.

ViewStateUserKey: ViewStateMac Relationship

November 26, 2013
Filed under: Development, Security, Testing 

I apologize for the delay, as I recently spoke about this at the SANS Pen Test Summit in Washington D.C. but haven’t had a chance to put it into a blog. While I was doing some research for my presentation on hacking ASP.Net applications, I came across something very interesting that sort of blew my mind. One of my topics was ViewStateUserKey, which is a feature of .Net to help protect forms from Cross-Site Request Forgery. I had always assumed that setting this value (it is off by default) put a unique key into the view state for the specific user. ViewState is a client-side storage mechanism that the form uses to help maintain state.

I have a previous post about ViewStateUserKey and how to set it here: https://jardinesoftware.net/2013/01/07/asp-net-and-csrf/

While I was doing some testing, I found that my ViewState wasn’t different between users even though I had set the ViewStateUserKey value. Of course it was late at night.. well ok, early morning, so I thought maybe I wasn’t setting it right. But I triple checked and it was right. Upon closer inspection, my view state was identical between my two users. I was really confused because, as I mentioned, I thought it put a unique value into the view state to make the view state unique.

My Problem… ViewStateMAC was disabled. But wait.. what does ViewStateMAC have to do with ViewStateUserKey? That is what I said. So I started digging in with Reflector to see what was going on. What did I find? The ViewStateUserKey is actually used as a modifier when the ViewState MAC is generated. It doesn’t store a special value in the ViewState.. rather, it modifies how the MAC is generated to protect the ViewState from parameter tampering.

So this does work*. If the MAC is different between users, then the ViewState is ultimately different and the attacker’s value is different from the victim’s. When the ViewState is submitted, the MACs won’t match, which is what we want.

Unfortunately, this means we are relying again on ViewStateMAC being enabled. Don’t get me wrong, I think it should be enabled and this is yet another reason why. Without it, it doesn’t appear that ViewStateUserKey does anything. We have been saying for the longest time that to protect against CSRF, set the ViewStateUserKey. No one has said it relies on ViewStateMAC though.

To Recap.. Things that rely on ViewStateMAC:

  • ViewState
  • Event Validation
  • ViewStateUserKey

It is important that we understand the framework features, as disabling one item could cause a domino effect on other items. Be secure.

Bounties For Fixes

October 11, 2013
Filed under: Security 

It was just recently announced that Google will pay for open-source code security fixes (http://www.computerworld.com/s/article/9243110/Google_to_pay_for_open_source_code_security_fixes). Paying for stuff to happen is nothing new, we have seen Bug Bounty programs popping up in a lot of companies. The idea behind the bug bounty is that people can submit bugs they have found and then possibly get paid for that bug. This has been very successful for some large companies and some bug finders out there.

The difference in this new announcement is that they are paying for people to apply fixes to some open source tools that are widely used. I personally think this is a good thing because it will encourage people to actually start fixing some of the issues that exist. Security is usually bent on finding vulnerabilities, which doesn’t really help fix security at all. It still requires the software developers to implement some sort of change to get that security hole plugged. Here, we see that the push to fix the problem is now being rewarded. This is especially true in open-source projects as many of the people that work on these projects do so voluntarily.

Is there any concern though that this process could be abused? The first thought that comes to mind is people working together where one person plants the bug and the other one fixes it. Not sure how realistic that is, but I am sure there are people thinking about it. What could possibly be more challenging is verifying the fixes. What happens if someone patches something, but they do it incorrectly? Who is testing the fix? How do they verify that it is really fixed properly? If they find later that the fix wasn’t complete, does the fixer have to return the payment? There are always questions to be answered when we look at a new program like this. I am sure that Google has thought about this before rolling it out and I really hope the program works out well. It is a great idea and we need to get more people involved in helping fix some of these issues.

Your Passwords Were Stolen: What’s Your Plan?

May 29, 2013
Filed under: Development, Security 

If you have been glancing at many news stories this year, you have certainly seen the large number of data breaches that have occurred. Even just today, we are seeing reports that Drupal.org suffered from a breach (https://drupal.org/news/130529SecurityUpdate) that shows unauthorized access to hashed passwords, usernames, and email addresses. Note that this is not a vulnerability in the CMS product, but the actual website for Drupal.org. Unfortunately, Drupal is just the latest to report this issue.

In addition to Drupal, LivingSocial also suffered a huge breach involving passwords. LinkedIn, Evernote, Yahoo, and Name.com have also joined this elite club. In each of these cases, we have seen many different formats for storing the passwords. Some are using plain text (ouch), while others are actually doing what has been recommended and using a salted hash. Even with a salted hash, there are still some issues. For one, hashes are fast, and some hashes are not as strong as others. Bad choices can lead to an immediate failure in implementation and protection.

What format you should store your passwords in will be saved for another post, and it has been discussed heavily on the internet. It is really outside the scope of this post, because in this discussion, it is already too late for that. Here, I want to ask the simple question, “You have been breached. What do you do?”

Ok, maybe it is not a simple question, or maybe it is. Most of the sites that have seen these breaches are fairly quick to force password resets for all of their users. The idea behind this is that the credentials were stolen, but only the actual user should be able to perform a password reset. The user performs the reset, they have new credentials, and the information that the bad guy got (and everyone else that downloads the stolen credentials) is no good. Or maybe not?? Wait.. you re-use passwords across multiple sites? Well, that makes it more interesting. I guess you now need to reset a bunch of passwords.

Resetting passwords appears to be the standard. I haven’t seen anyone attempt to do anything else; if you have, please share. But what else would work? You can’t just change the algorithm to make it stronger.. the bad guy has the password. Changing the algorithm doesn’t change that fact; they would just log in using the stronger algorithm. I guess that won’t work. It might be nice to have a mechanism to upgrade everyone to a stronger algorithm as time goes on though.
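One way such a mechanism is often built (sketched below with a made-up storage format, not tied to any particular framework) is to verify the submitted password against the old parameters and, on a successful login, transparently re-hash it with the stronger ones while the plaintext is briefly in hand:

    # Minimal sketch of upgrading password hashes over time (illustrative only;
    # the "pbkdf2$<iterations>$<salt>$<hash>" storage format is a made-up example).
    import hashlib, hmac, os

    TARGET_ITERATIONS = 600000

    def make_hash(password, iterations=TARGET_ITERATIONS):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return "pbkdf2$%d$%s$%s" % (iterations, salt.hex(), digest.hex())

    def verify_and_upgrade(password, stored):
        _, iters, salt, digest = stored.split("$")
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                        bytes.fromhex(salt), int(iters))
        if not hmac.compare_digest(candidate.hex(), digest):
            return False, stored
        # Credentials check out; if the stored hash is weaker than the current
        # target, re-hash now while the plaintext is available.
        if int(iters) < TARGET_ITERATIONS:
            stored = make_hash(password)
        return True, stored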

So if resetting passwords en masse appears to work and is the standard, do you have a way to do it? If you got breached today, what would you need to do to reset everyone’s password, or at least force a password reset on all users? There are a few options, and of course it depends on how you actually manage user passwords.

If you have a password expiration field in the DB, you could just set all passwords to have expired yesterday. Now everyone will be presented with an expired password prompt. The problem with this solution is that, if an expired password just requires the old password to set the new one, it is possible the bad guy does this before the actual user. Oops.

You could just null out, or put a 0 or some other false value into, all of the password fields. This only works for encrypted or hashed passwords.. not clear text. This could be done with a simple SQL Update statement, just drop that needless where clause ;). When a user goes to log in, they will be unsuccessful because when the password they submit is encrypted or hashed, it will never match the value you updated the field to. This forces them to use the forgot password process.
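As a minimal sketch of that idea (assuming a hypothetical users table with a password_hash column; SQLite is used here purely for illustration):

    # Overwrite every stored hash with a sentinel that no real hash can match
    # (illustrative only; table and column names are made up).
    import sqlite3

    conn = sqlite3.connect("app.db")

    # Because submitted passwords are hashed before comparison, every login
    # attempt now fails and users must go through the forgot password process.
    conn.execute("UPDATE users SET password_hash = 'RESET-REQUIRED'")
    conn.commit()
    conn.close()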

You could run a separate application that resets everyone’s password like the previous method; it just doesn’t run a DB update directly against the server. Maybe you are a control freak as to what gets run against the servers and want to control that access.

As you can see, there are many ways to do this, but have you really given it any thought? Have you written out your plan so that, in the event your site gets breached like these other sites, you will be able to react intelligently and swiftly? Oftentimes when we think of incident response, we think of stopping the attack, but we also have to think about how we would protect our users from future attacks quickly.

These are just a few examples to help provoke you and your team into thinking about how you would handle this in your situation. Every application is different and this scenario should be in your IR plan. If you have other ways to handle this, please share with everyone.

The Watering Hole: Is it Safe to Drink?

May 7, 2013
Filed under: Security 

How many times have you been told you have a vulnerability and you just don’t understand its relevancy?  Cross-site scripting comes to mind for many people.  Sure, they get the fact that you can execute script in the user’s browser, but often times they really don’t fully understand the impact.  Of course, we determine that impact through risk analysis.  What is the true impact and how much risk does it pose to the affected parties?

Over the years, I have heard numerous companies and previous employers state that no one would attack them because they are too small or that they didn’t have anything that the attackers would want.  I have always disagreed with this statement or theory.  Maybe you are a company that doesn’t contain financial data, or health information.  Maybe you don’t deal with sensitive information at all.  So what is the risk? 

We have to start thinking about more than just the type of data that we hold.  We have to look at the bigger picture.  Who are our clients or users?  Who do we do business with that may have something of interest to an attacker?  One of the big concerns that has been directed toward these smaller companies is the idea of pivoting.  If I wanted to attack a major bank, would it make sense to attack the bank directly?  Very large banks usually have bigger budgets and theoretically would have stronger security controls in place.  That could be a lot of work to get through that entry point.  But what about that small company, with a smaller budget and probably (not always) fewer security controls, that does business with that big bank?  Is there an opportunity to compromise the small company and pivot into the larger bank through a B2B channel they have set up?  This is certainly a possibility.

Something newer we are seeing is this idea of a Watering Hole attack.  This focuses more on WHO visits your site.  The idea behind a watering hole attack is that it is a targeted drive-by malware type of attack.  Rather than put a malicious payload on a site that EVERYONE accesses, why not target a site that the victim you are tracking frequents?  Think of this as similar to the difference between phishing and spear phishing.  In a phishing attack we send out the attack email en masse, but in spear phishing, we are much more refined in who receives the message.  The same goes for this watering hole attack.

As always, we are witnessing the evolution of these attacks.  Migrating from a broad spreading mechanism to a more targeted one has a lot of benefits.  One is that your specific target is more likely to fall prey.  Two, there is less chance of the attack getting noticed if fewer users actually see it.  We have seen other situations where the attackers have actually built their delivery mechanism to not deliver to known security professionals or researchers based on their IP address, to avoid getting noticed as quickly.

The watering hole is just another example of why security does matter to every website, no matter what your content may be.  Even if the attack isn’t against our servers, but against our users, that can have a serious effect on our businesses.   The next time you hear someone say that they are too small or don’t have any data that attackers may want, think about the watering hole concept and see if you are still a nobody in this world.
