ASP.Net Insufficient Session Timeout
Filed under: Development, Security, Testing
A common security finding in ASP.Net applications is Insufficient Session Timeout. In this article, the focus is not on the ASP.Net session failing to terminate, but on the forms authentication cookie that is still valid after logout.
How to Test
- User is currently logged into the application.
- User captures the .ASPXAUTH cookie (the default name; it may differ between applications).
- Cookie can be captured using a browser plugin or a proxy used for request interception.
- User saves the captured cookie for later use.
- User logs out of the application.
- User requests a page on the application, passing the previously captured authentication cookie.
- The page is processed and access is granted.
Typical Logout Options
- The application calls FormsAuthentication.SignOut()
- The application sets the cookie's Expires property to a past DateTime.
Cookie Still Works!!
Following the user process above, the cookie still provides access to the application as if the logout never occurred. So what is the deal? The key is that unlike a true “session”, which is maintained on the server, the forms authentication cookie is self-contained. It has no server-side component to stay in sync with. Among other things, the authentication cookie contains your username or ID, possibly roles, and an expiration date. When the cookie is received by the server, it is decrypted (please tell me you are using protection="All") and the data extracted. If the cookie's internal expiration date has not passed, the cookie is accepted and processed as valid.
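To see why, consider what the server effectively does with the cookie. Here is a minimal sketch using the real FormsAuthentication.Decrypt API; the cookieValue parameter is a hypothetical stand-in for the raw cookie string:

using System.Web.Security;

public static class TicketInspector
{
    // Sketch of the freshness check the framework effectively performs.
    // "cookieValue" is a hypothetical raw forms authentication cookie value.
    public static bool IsTicketStillValid(string cookieValue)
    {
        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookieValue);
        // The only check is the ticket's own embedded expiration; nothing
        // here knows whether the user has since logged out.
        return ticket != null && !ticket.Expired;
    }
}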
So what did FormsAuthentication.SignOut() do?
If you look under the hood of the .Net framework (it has been a few years, but I doubt much has changed), you will see that FormsAuthentication.SignOut() really just removes the cookie from the browser. There is no code to perform any server-side function; it merely asks the browser to remove the cookie by clearing the value and back-dating the Expires property. While this does work to remove the cookie from the browser, it has no effect on a copy of the original cookie you may have captured. The only sure way to make the cookie inactive (before the internal timeout occurs) would be to change your machine key in the web.config file. This is not a reasonable solution.
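Working from memory of the framework internals, the effect amounts to something like this sketch (not the exact framework source):

using System;
using System.Web;
using System.Web.Security;

public static class SignOutSketch
{
    // Roughly what SignOut() amounts to: overwrite the cookie with an empty,
    // already-expired replacement so the browser discards its copy.
    public static void RoughSignOutEquivalent(HttpResponse response)
    {
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, String.Empty);
        cookie.Expires = DateTime.Now.AddYears(-1); // back-dated so the browser deletes it
        response.Cookies.Add(cookie);
        // Any copy of the original cookie captured earlier is untouched
        // and remains valid until its internal expiration passes.
    }
}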
Possible Mitigations
You should be protecting your cookie by setting the HttpOnly and Secure properties. HttpOnly tells the browser not to allow JavaScript access to the cookie value, an important step in protecting the cookie from theft via cross-site scripting. The Secure flag tells the browser to only send the authentication cookie over HTTPS, making it much more difficult for an attacker to intercept the cookie as it is sent to the server.
Set a short timeout (15 minutes, for example) on the cookie to decrease the window an attacker has to use a captured cookie.
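Both of these mitigations can also be applied in configuration, but as a hedged illustration in code (GetAuthCookie and the HttpCookie properties are real framework members; the ticket's internal lifetime comes from the forms timeout setting in web.config):

using System.Web;
using System.Web.Security;

public static class AuthCookieIssuer
{
    // Issue a non-persistent auth cookie with the protective flags set.
    public static void IssueHardenedAuthCookie(HttpResponse response, string userName)
    {
        HttpCookie authCookie = FormsAuthentication.GetAuthCookie(userName, false);
        authCookie.HttpOnly = true; // no script access; blunts theft via XSS
        authCookie.Secure = true;   // only transmitted over HTTPS
        response.Cookies.Add(authCookie);
    }
}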
You could attempt to build a tracking system to manage the authentication cookie on the server, so it can be disabled before its internal timeout expires. Maybe something for another post.
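As a very rough sketch of the idea (all names here are hypothetical; a real version would need persistence across servers and cleanup of expired entries):

using System.Collections.Generic;
using System.Web;
using System.Web.Security;

public static class TicketDenylist
{
    // Hypothetical in-memory denylist of logged-out cookie values.
    private static readonly HashSet<string> Revoked = new HashSet<string>();

    // Call at logout, before clearing the cookie from the browser.
    public static void RevokeCurrent(HttpRequest request)
    {
        HttpCookie cookie = request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie != null)
        {
            lock (Revoked) { Revoked.Add(cookie.Value); }
        }
    }

    // Call early in the pipeline to reject revoked tickets even though
    // their internal expiration has not yet passed.
    public static bool IsRevoked(HttpRequest request)
    {
        HttpCookie cookie = request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie == null) return false;
        lock (Revoked) { return Revoked.Contains(cookie.Value); }
    }
}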
Understand how the application is used to determine how risky this issue may be. If the application is not used on shared/public systems and the cookie is protected as mentioned above, the attack surface is significantly decreased.
Final Thoughts
If you are facing this type of finding and it is a forms authentication cookie issue, not the ASP.Net session cookie, take the time to understand the risk. Make sure you understand your settings and the priority and sensitivity of the application to properly understand “your” risk level. Don't rely on third-party risk ratings to determine how serious the flaw is. In many situations this may be a low priority; in the right app, it could be a high priority.
EMV Chip cards: Overview
When you shop at a store with a credit card, the transaction is typically done by swiping your card. The swiping action allows the credit card terminal to read your credit card number off of a magnetic stripe on the back of the card. The downside to magnetic stripe technology is that it is very easy and cheap for the bad guys to reproduce. A common way for a bad actor to get your credit card number is to use a skimmer, a small device that fits over the normal reading mechanism, to steal the data. Once this is done, they can create a new card with the same information and attempt to use it at a store.
What is an EMV Card?
In an effort to reduce card present fraud, cards are making the switch to EMV (Europay, MasterCard, Visa) cards. EMV cards use a smart chip, rather than the magnetic stripe, to provide the information needed to complete the transaction. Unlike the old magnetic cards that you swipe, the EMV cards are “dipped” by inserting the chip side of the card into the card reader. The card stays in the device for a few seconds until the transaction is complete. One of the features of the chip is that it sends a unique token for use with each transaction making it harder for a malicious user to replicate the data, even if it is retrieved.
EMV cards are also much more costly to create, which adds to the complexity for a thief creating fraudulent cards.
Two Options: Chip + Pin and Chip + Signature
There are two ways that EMV cards are used. The first is called Chip and PIN. In this scenario, the user has a numeric PIN that must be entered when using the card in person. This is very similar to how a debit card works today. The PIN requirement helps protect the card from being replicated by a bad actor, or used by someone who may have stolen the actual card from you. Of course, this can get tricky when you have multiple credit cards and have to remember the PIN for each one.
The other way is called Chip and Signature. In this scenario, the card owner doesn't have a PIN, but instead is required to sign for the transaction. This is very similar to how most credit cards still work today. It is also much easier to bypass, because an attacker only has to forge a signature rather than come up with a PIN.
Reduces Risk for Card Present Transactions
As I mentioned earlier, this only adds security to the card present scenario. It doesn't change anything for the card not present (shopping online) scenario. Also know that it will be a while before we see chip-only cards. For the time being, newer cards will have both a chip and a magnetic stripe, because it is going to take a while before every vendor makes the switch to EMV. Keeping the magnetic stripe also reduces the security a bit, since there will still be a lot of places that won't upgrade and will continue to use the stripe.
For Consumers
The biggest change is that you will use your card by “dipping” it, rather than swiping it, for in-person transactions. The goal is to reduce the chance of your card being replicated by a bad actor and funds being racked up on your account. Most credit cards are zero liability, so this may or may not be very noticeable to most consumers. Hopefully it will at least cut down on some of the card present fraud that happens. Only time will tell.
For Businesses
The liability is changing with the push for EMV cards. If you are not following the recommended approaches, such as requiring EMV over the magnetic stripe, you may find yourself liable for any fraudulent charges. Make sure you are validating the cards and the card holder when accepting credit cards to help reduce your liability, and update your terminals to accept the EMV chip data rather than the magnetic stripe.
Final Thoughts
This won't solve all of the problems, but it is a step in the right direction. The United States is one of the last countries to adopt the chip technology. Many of the credit cards will be Chip and Signature, but you may get a few that are Chip and PIN.
Remember, no matter what technologies get added to protect our credit cards, it is still your responsibility to monitor your statements and report any fraudulent charges. It is part of the responsibility of having a credit card.
F5 BigIP Decode with Fiddler
Filed under: Development, Testing
There are many tools out there that allow you to decode the F5 BigIP cookie used on some sites, but I haven't seen anything that just plugs into Fiddler, if that is what you use for debugging purposes. One of the reasons you may want to decode the F5 cookie is just that: debugging. If you need to know which server behind the load balancer your request is going to while troubleshooting a bug, this is the cookie you need. I won't go into a long discussion of the F5 cookie here, but I have posted a description of it previously.
Most of the examples I have seen use Python to do the conversion. I looked for a JavaScript example, since that is what Fiddler supports in its Custom Rules, but couldn't really find anything. I got to messing around with it and put together a very rough set of functions to decode the cookie value back to its IP address and port. It sticks the result into a custom column in the Fiddler interface (scroll all the way to the right; it is the last column). If it identifies the cookie, it will attempt to decode it and populate the column. This can be done for both response and request cookies.
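As a quick worked example, take a hypothetical cookie value of 1677787402.36895.0000. The first number, 1677787402, is 6401010A in hex; reading those four bytes in reverse order gives 0A.01.01.64, which is 10.1.1.100. The second number, 36895, is 901F in hex; swapping the two bytes gives 1F90, which is port 8080. So the request was routed to 10.1.1.100:8080.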
To decode response cookies, you need to update the custom Fiddler rules by adding code to the static function OnBeforeResponse(oSession: Session) method. The following code can be inserted at the end of the function:
var re = /\d+\.\d+\.0{4}/; // Simple regex to identify the BigIP pattern.
if (oSession.oResponse.headers) {
    for (var x:int = 0; x < oSession.oResponse.headers.Count(); x++) {
        if (oSession.oResponse.headers[x].Name.Contains("Set-Cookie")) {
            var cookie : Fiddler.HTTPHeaderItem = oSession.oResponse.headers[x];
            var myArray = re.exec(cookie.Value);
            if (myArray != null && myArray.length > 0) {
                for (var i = 0; i < myArray.length; i++) {
                    // Split the match into the encoded IP (before the first dot)
                    // and the encoded port (between the first and second dots).
                    var index = myArray[i].indexOf(".");
                    var val = myArray[i].substring(0, index);
                    // Convert the decimal IP value to hex, padded to 8 digits.
                    var hIP = parseInt(val).toString(16);
                    while (hIP.length < 8) {
                        hIP = "0" + hIP;
                    }
                    // The IP is stored little-endian, so read the octets in reverse.
                    var hIP1 = parseInt(hIP.substring(6,8), 16);
                    var hIP2 = parseInt(hIP.substring(4,6), 16);
                    var hIP3 = parseInt(hIP.substring(2,4), 16);
                    var hIP4 = parseInt(hIP.substring(0,2), 16);
                    var val2 = myArray[i].substring(index + 1);
                    var index2 = val2.indexOf(".");
                    val2 = val2.substring(0, index2);
                    // Convert the decimal port value to hex, padded to 4 digits.
                    var hPort = parseInt(val2).toString(16);
                    while (hPort.length < 4) {
                        hPort = "0" + hPort;
                    }
                    // The port is also little-endian: swap the two bytes.
                    var hPortS = hPort.substring(2,4) + hPort.substring(0,2);
                    var hPort1 = parseInt(hPortS, 16);
                    oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + " ";
                }
            }
        }
    }
}
In order to decode the cookie from a request, you need to add the following code to the static function OnBeforeRequest(oSession: Session) method.
var re = /\d+\.\d+\.0{4}/; // Simple regex to identify the BigIP pattern.
oSession["ui-customcolumn"] = "";
if (oSession.oRequest.headers.Exists("Cookie")) {
    var cookie = oSession.oRequest["Cookie"];
    var myArray = re.exec(cookie);
    if (myArray != null && myArray.length > 0) {
        for (var i = 0; i < myArray.length; i++) {
            // Same decode as the response handler: IP first, then port.
            var index = myArray[i].indexOf(".");
            var val = myArray[i].substring(0, index);
            var hIP = parseInt(val).toString(16);
            while (hIP.length < 8) {
                hIP = "0" + hIP;
            }
            var hIP1 = parseInt(hIP.substring(6,8), 16);
            var hIP2 = parseInt(hIP.substring(4,6), 16);
            var hIP3 = parseInt(hIP.substring(2,4), 16);
            var hIP4 = parseInt(hIP.substring(0,2), 16);
            var val2 = myArray[i].substring(index + 1);
            var index2 = val2.indexOf(".");
            val2 = val2.substring(0, index2);
            var hPort = parseInt(val2).toString(16);
            while (hPort.length < 4) {
                hPort = "0" + hPort;
            }
            var hPortS = hPort.substring(2,4) + hPort.substring(0,2);
            var hPort1 = parseInt(hPortS, 16);
            oSession["ui-customcolumn"] += hIP1 + "." + hIP2 + "." + hIP3 + "." + hIP4 + ":" + hPort1 + " ";
        }
    }
}
Again, this is a rough compilation of code to perform the tasks. I am well aware there are other ways to do this, but this did seem to work. USE AT YOUR OWN RISK. It is your responsibility to make sure any code you add or use is suitable for your needs. I am not liable for any issues from this code. From my testing, this worked to decode the cookie and didn't present any issues. This is not production code, but an example of how this task could be done.
Just add the code to the custom rules file and visit a site with an F5 cookie, and it should decode the value.
Static Analysis: Analyzing the Options
Filed under: Development, Security, Testing
When it comes to automated testing for applications there are two main types: Dynamic and Static.
- Dynamic scanning is where the scanner is analyzing the application in a running state. This method doesn’t have access to the source code or the binary itself, but is able to see how things function during runtime.
- Static analysis is where the scanner is looking at the source code or the binary output of the application. While this type of analysis doesn't see the code as it is running, it has the ability to trace how data flows through the application down to the function level.
Dynamic scanning is an important component of any secure development workflow, but it only covers the system as it is running. Before the application is running, the focus shifts to the source code, which is where static analysis fits in. At this stage it is possible to identify many common vulnerabilities while integrating into your build processes.
If you are thinking about adding static analysis to your process there are a few things to think about. Keep in mind there is not just one factor that should be the decision maker. Budget, in-house experience, application type and other factors will combine to make the right decision.
Disclaimer: I don’t endorse any products I talk about here. I do have direct experience with the ones I mention and that is why they are mentioned. I prefer not to speak to those products I have never used.
Budget
I hate to list this first, but honestly it is a pretty big factor in your implementation of static analysis. The options that exist for static analysis range from FREE to VERY EXPENSIVE. It is good to have an idea of what type of budget you have at hand to better understand which option may be right.
Free Tools
There are a few free tools out there that may work for your situation. Most of these tools depend on the programming language you use, unlike many of the commercial tools that support most of the common languages. For .Net developers, CAT.NET is the first static analysis tool that comes to mind. The downside is that it has not been updated in a long time. While it may still help a little, it will not compare to many of the commercial tools that are available.
In the Ruby world, I have used Brakeman which worked fairly well. You may find you have to do a little fiddling to get it up and running properly, but if you are a Ruby developer then this may be a simple task.
Managed Services or In-House
Can you manage a scanner in-house or is this something better delegated to a third party that specializes in the technology?
This can be a difficult question because it may involve many facets of your development environment. Choosing to host the solution in-house, like HP’s Fortify SCA may require a lot more internal knowledge than a managed solution. Do you have the resources available that know the product or that can learn it? Given the right resources, in-house tools can be very beneficial. One of the biggest roadblocks to in-house solutions is related to the cost. Most of them are very expensive. Here are a few in-house benefits:
- Ability to integrate directly into your Continuous Integration (CI) operations
- Ability to customize the technology for your environment/workflow
- Ability to create extensions to tune the results
Choosing to go with a managed solution works well for many companies. Whether it is because the development team is small, resources aren't available, or the budget is tight, using a third party may be the right solution. There is always the question of whether you are ok with sending your code to a third party, but many companies accept this to get the solution they need. Many of the managed services have the additional benefit of reducing false positives in the results. Weeding through results can be one of the most time consuming parts of owning a static analysis tool, right up there with getting it set up and configured properly. Some scans may return tens of thousands of results, and wading through all of those is very time consuming and wears on the poor person stuck doing it. Having a company manage that portion can be very beneficial and cost effective.
Conclusion
Picking the right static analysis solution is important, but can be difficult. Take the time to determine what your end goal is when implementing static analysis. Are you looking for something that is good, but not customizable to your environment, or something that is highly extensible and integrated closely with your workflow? Unfortunately, sometimes our budget may limit what we can do, but we have to start someplace. Take the time to talk to other people that have used the solutions you are looking at. Has their experience been good? What did/do they like? What don’t they like? Remember that static analysis is not the complete solution, but rather a component of a solution. Dropping this into your workflow won’t make you secure, but it will help decrease the attack surface area if implemented properly.
A Pen Test is Coming!!
Filed under: Development, Security, Testing
You have been working hard to create the greatest app in the world. Ok, so maybe it is just a simple business application, but it is still important to you. You have put countless hours of hard work into creating this masterpiece. It looks awesome and does everything the business has asked for. Then you get the email from security: your application will undergo a penetration test in two weeks. Your heart skips a beat and sinks a little as you recall everything you have heard about this experience. Most likely, your immediate reaction is to go on the defensive. Why would your application need a penetration test? Of course it is secure, we use HTTPS. No one would attack us, we are small. Take a breath.. it is going to be alright.
All too often, when I go into a penetration test, the developers start on the defensive. They don’t really understand why these ‘other’ people have to come in and test their application. I understand the concerns. History has shown that many of these engagements are truly considered adversarial. The testers jump for joy when they find a security flaw. They tell you how bad the application is and how simple the fix is, leading to you feeling about the size of an ant. This is often due to a lack of good communication skills.
Penetration testing is adversarial. It is an offensive assessment to find security weaknesses in your systems, an attempt to simulate an attacker against your system. Of course there are many differences, such as scope, timing and rules, but the goal is the same: let's see what we can do to your system. Unfortunately, I find that many testers don't have the communication skills to relay the information back to the business and developers in a positive way. I can't tell you how many times I have heard people describe their job as great because they get to come in, tell you how bad you suck, and then leave. If that is your penetration tester, find a new one. First, that attitude breaks down communication with the client and doesn't help promote a secure atmosphere. We don't get anywhere by belittling the teams that have worked hard to create their application. Second, a penetration test should provide solid recommendations to the client on how they can work to resolve the issues identified. Just listing a bunch of flaws is fairly useless to a company.
These engagements should be worth everyone’s time. There should be positive communication between the developers and the testing team. Remember that many engagements are short lived so the more information you can provide the better the assessment you are going to get. The engagement should be helpful. With the right company, you will get a solid assessment and recommendations that you can do something with. If you don’t get that, time to start looking at another company for testing. Make sure you are ready for the test. If the engagement requires an environment to test in, have it all set up. That includes test data (if needed). The testers want to hit the ground running. If credentials are needed, make sure those are available too. The more help you can be, the more you will benefit from the experience.
As much as you don't want to hear it, there is a very high chance the test will find vulnerabilities. While it would be great if applications didn't have vulnerabilities, it is fairly rare to find one without them. Use this experience to learn and train on security issues. Take the feedback as constructive criticism, not as a personal attack. Trust me, you want the pen testers to find these flaws before a real attacker does.
Remember that this is for your benefit. We as developers also need to stay positive. The last thing you want to do is challenge the pen testers by claiming your app is not vulnerable; the teams that do that are usually the most vulnerable. Stay positive and it will be a great learning experience.
Are Application Security Certifications Worth It?
Filed under: Security
In the IT industry there has always been a debate for and against certifications. This is no different than the age-old battle over whether a bachelor's degree is needed to be good in IT. There are large entities that have made a really good profit off the certification tracks. Not only do you have the people that create the tests, but also all of the testing centers. It is a pretty lucrative business if your cert is popular.
I remember when I first started developing applications there were certifications like the Microsoft Certified series or the Sun certifications. Anyone remember doing the BrainBench tests online? The goal was to indicate that you had some base level of knowledge about that technology. This seemed to work for a given technology, but so far the model doesn't seem to be catching on in the development world for secure development certifications.
You haven't heard? There are actually certifications that try to show some expertise in application security. GIAC has a secure coding program for both Java and .Net, both leading to the GSSP certification. ISC2 has the CSSLP certification aimed at those that work with developing applications. They don't feel that widespread though. Let's look at these two examples.
The GIAC certification focuses mostly on the developer and writing secure code. This is tough because it is a certification for only a portion of your job as a developer. Your main goal is writing code, so taking the effort to go out and get a certification that is so narrowly focused can be a deterrent, never mind the cost of these certs these days. The other issue is that we are not seeing wide acceptance of these certifications in the industry. I have not seen many job postings for developers that look for the GSSP or CSSLP certification, or any other secure coding cert. You might see MCP or MCSD, but not security certs. Until we start looking for these in our candidates, there is no reason for developers to take the time to get them.
The ISC2 CSSLP certification is geared less toward secure coding and more toward the entire SDLC. This alone may make it even less interesting for a developer to attain, because it is not directly related to coding. Sure we are involved in the SDLC, but do we really want some cert that says we are security conscious? I am not saying that certifications are a bad thing. I think they can help show some competence, but there seem to be a lot of barriers to adoption of security certifications within the developer community.
When you look at other security certifications, they map more directly to a job role. For example, the web application penetration tester certifications that are available encompass a role: web penetration tester. In our examples above, there is no "GSSP" role for a developer.
How do we go about solving the problem? Is there a certification that could actually be broadly adopted in the developer world? Rather than have a separate security certification, should we expect the existing developer certifications to incorporate security? Just because I have the GSSP doesn't mean I can actually write good programs with no flaws. Would I be more marketable if I had the MCSD and everyone knew that it required secure coding expertise?
Push the major developer certification creators to start requiring more secure coding coverage. We shouldn’t need an extra certification for application security, it should just be a part of what we do every day.
Application Logging: The Next Great Wonder
Filed under: Security
What type of logging do you perform in your applications? Do you just log exceptions? Many places I have worked and many developers I have talked to over the years mostly focus on logging troubleshooting artifacts. Where did the application break, and what may have caused it. We do this because we want to be able to fix the bugs that may crop up that cause our users difficulty in using the application.
While this makes sense to many developers, because it is directly related to the pain they face in troubleshooting, it leaves a lot to be desired. From a security perspective, there is much more that should be considered. The simplest events are successful and unsuccessful authentication attempts. Many developers will say they log the first, but the latter is usually overlooked. In reality, the failed attempts are most likely logged to support account lockout and don't serve much other purpose. But they should: those logs can be used to identify brute force attacks against a user's account.
Other events that are critical include logoff events, password change events, and even access to sensitive data. Not many days go by that we don't see word of a data breach. If your application accesses sensitive data, how do you know who has looked at it? If records are meant to be viewed one at a time, but someone starts pulling hundreds at a time, would you notice? If a breach occurs, are you able to go back into the logs and show what data has been viewed and by whom?
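As a hedged sketch of what such events might look like (the Log method below is a hypothetical stand-in for whatever framework you use, such as log4net or NLog):

using System;

public static class SecurityEvents
{
    public static void AuthenticationAttempt(string userName, bool success, string sourceIp)
    {
        // Log failures as well as successes: failures expose brute force
        // attempts, successes establish a timeline for investigations.
        Log(String.Format("AUTH {0} user={1} ip={2} at={3:o}",
            success ? "SUCCESS" : "FAILURE", userName, sourceIp, DateTime.UtcNow));
    }

    public static void SensitiveRecordAccess(string userName, string recordType, int count)
    {
        // Recording a count makes bulk pulls (hundreds at a time) stand out.
        Log(String.Format("ACCESS user={0} type={1} count={2} at={3:o}",
            userName, recordType, count, DateTime.UtcNow));
    }

    private static void Log(string message)
    {
        // Stand-in sink; a real application would use its logging framework.
        Console.WriteLine(message);
    }
}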
Logging and auditing play a critical role in an application, and finding the right balance of data stored is somewhat of an art. Some people may say that you need to just grab everything. That doesn't always work; performance is the first concern that comes to mind. I didn't say it would be easy to throw a logging plan together.
You have to understand your application and the business that it supports. Information and events that are important to one business may not be as important to another. That is ok. This isn't a one-size-fits-all solution. Take the time to analyze your situation and log what feels right, but put more thought into it than just troubleshooting. Think about how you would use that stored data if a breach occurs.
In addition to logging the data, there needs to be a plan in place to look at it. Whether through an automated tool or manual review (hopefully a mix of the two), you can't identify something if you don't look. All too often we see breaches occur and not be noticed for months or even years afterward. In many of these cases, if someone had just been looking at the logs, the breach would have been identified immediately and its risk could have been minimized.
There are tools out there to help with logging in your application, no matter what your platform of choice is. Logging is not usually a bolt-on solution; you have to be thinking about it before you build your application. Take the time up front to do this so when something happens, you have all the data you need to protect yourself and your customers.
Future of ViewStateMac: What We Know
Filed under: Development, Security, Testing
The .Net Web Development and Tools Blog just recently posted some extra information about ASP.Net December 2013 Security Updates (http://blogs.msdn.com/b/webdev/archive/2013/12/10/asp-net-december-2013-security-updates.aspx).
The most interesting thing to me was a note near the bottom of the page stating that the next version of ASP.Net will FORBID setting ViewStateMac=false. That is right.. they will not allow it in the next version. In short, if you have set it to false, start working out how to switch it back to true before you update.
So why forbid it? Apparently, there was a Remote Code Execution flaw identified that can be exploited when ViewStateMac is disabled. They don’t include a lot of details as to how to perform said exploit, but that is neither here nor there. It is about time that something was critical enough that they have decided to take this property out of the developer’s hands.
Over the years I have written many posts discussing attacking ASP.Net sites, many of which rely on ViewStateMac being disabled. I have written to Microsoft regarding how EventValidation can be manipulated if ViewStateMac is disabled. The response was that they think developers should be using the secure settings. I guess that is different now that there is remote code execution. We have reached a new level.
So what does ViewStateMac protect? There are three things that I am aware of that it protects (search this site for any of these and you will find articles with much more detail):
- ViewState – protects this from parameter tampering
- EventValidation – protects this from parameter tampering
- ViewStateUserKey – Used to protect requests from CSRF
So why do developers disable ViewStateMac? Great question. I believe that in most cases it is disabled because the application is deployed in a web farm, and when the web.config is not configured properly, an error is thrown. When developers search for the error, many forums recommend disabling ViewStateMac to fix the problem. Unfortunately, that is WRONG!! Here is a Microsoft KB article that explains in detail how to properly configure a system so that ViewStateMac can stay enabled (http://support.microsoft.com/kb/2915218).
Is this a good thing? For developers, yes! This will definitely help increase the protection for ViewState, EventValidation and CSRF if ViewStateUserKey is set. For penetration testers, yes and no. Yes, because we get to say you are doing a good job in this category. No, because some easy pickings are going to be wiped off the plate.
I think this is a pretty bold move by Microsoft to remove control over this, but I do think it is a good thing. This is an important control in the WebForms ecosystem and all too often misunderstood by developers. This change should bring many sites one step closer to being a little more secure when it rolls out.
ViewStateUserKey: ViewStateMac Relationship
Filed under: Development, Security, Testing
I apologize for the delay; I recently spoke about this at the SANS Pen Test Summit in Washington D.C. but haven't had a chance to put it into a blog post. While doing some research for my presentation on hacking ASP.Net applications, I came across something very interesting that sort of blew my mind. One of my topics was ViewStateUserKey, a feature of .Net that helps protect forms from Cross-Site Request Forgery. I had always assumed that setting this value (it is off by default) put a unique, per-user key into the ViewState. ViewState is a client-side storage mechanism that the form uses to help maintain state.
I have a previous post about ViewStateUserKey and how to set it here: https://jardinesoftware.net/2013/01/07/asp-net-and-csrf/
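For convenience, the usual pattern looks something like this (ViewStateUserKey is the real Page property; tying it to the session ID is the commonly recommended choice; the AccountPage class name is just a placeholder):

using System;
using System.Web.UI;

public partial class AccountPage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Tie the anti-CSRF value to the current user's session so the
        // resulting MAC differs between users.
        ViewStateUserKey = Session.SessionID;
    }
}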
While I was doing some testing, I found that my ViewState wasn't different between users even though I had set the ViewStateUserKey value. Of course it was late at night.. well ok, early morning, so I thought maybe I wasn't setting it right. But I triple checked and it was right. Upon closer inspection, my ViewState was identical between my two users. I was really confused because, as I mentioned, I thought it put a unique value into the ViewState to make it unique per user.
My problem… ViewStateMAC was disabled. But wait.. what does ViewStateMAC have to do with ViewStateUserKey? That is what I said. So I started digging in with Reflector to see what was going on. What did I find? The ViewStateUserKey is actually used as a modifier to the ViewState MAC. It doesn't store a special value in the ViewState.. rather, it modifies how the MAC is generated to protect the ViewState from parameter tampering.
So this does work*. If the MAC is different between users, then the ViewState is ultimately different, and the attacker's value differs from the victim's. When the ViewState is submitted, the MACs won't match, which is exactly what we want.
Unfortunately, this means we are relying again on ViewStateMAC being enabled. Don't get me wrong, I think it should be enabled, and this is yet another reason why. Without it, the ViewStateUserKey doesn't appear to do anything. We have been saying for the longest time that to protect against CSRF, set the ViewStateUserKey. No one has said it relies on ViewStateMAC though.
To Recap.. Things that rely on ViewStateMAC:
- ViewState
- EventValidation
- ViewStateUserKey
It is important that we understand the framework features as disabling one item could cause a domino effect of other items. Be secure.
Bounties For Fixes
It was just recently announced that Google will pay for open-source code security fixes (http://www.computerworld.com/s/article/9243110/Google_to_pay_for_open_source_code_security_fixes). Paying for stuff to happen is nothing new, we have seen Bug Bounty programs popping up in a lot of companies. The idea behind the bug bounty is that people can submit bugs they have found and then possibly get paid for that bug. This has been very successful for some large companies and some bug finders out there.
The difference in this new announcement is that they are paying for people to apply fixes to some open source tools that are widely used. I personally think this is a good thing because it will encourage people to actually start fixing some of the issues that exist. Security is usually bent on finding vulnerabilities, which doesn’t really help fix security at all. It still requires the software developers to implement some sort of change to get that security hole plugged. Here, we see that the push to fix the problem is now being rewarded. This is especially true in open-source projects as many of the people that work on these projects do so voluntarily.
Is there any concern, though, that this process could be abused? The first thought that comes to mind is people working together, where one person plants the bug and the other one fixes it. Not sure how realistic that is, but I am sure there are people thinking about it. What could be even more challenging is verifying the fixes. What happens if someone patches something, but does it incorrectly? Who is testing the fix? How do they verify that it is really fixed properly? If they find later that the fix wasn't complete, does the fixer have to return the payment? There are always questions to be answered when we look at a new program like this. I am sure Google thought about this before rolling it out, and I really hope the program works out well. It is a great idea, and we need more people involved in helping fix some of these issues.