Saturday, February 21, 2015
Howdy folks! I would like to assure my audience that SecuRelevance is alive and well. It has been five months since my last post, with some family and job priorities taking precedence. My goal is still, and has always been, to research and post topics every two weeks. I believe I am finally in a position to do that again.
With that said, let's start the first post for 2015.
I was doing some research tonight when a thought occurred to me. Many applications have shifted to front-end web applications with back-end databases over what I would guesstimate to be the past eight to ten years. As I recall, for roughly the first three or four years of my career most applications in use relied on a desktop console, so I imagine that timing is about right.
I've heard a lot of hullabaloo over web applications and security with regard to things like Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), input validation, and SQL injection, to name a few. But one area that just dawned on me is the underlying architecture that so much of a web application depends on - things like .NET, or the languages and their libraries themselves (e.g., C#, C++, XML, or Java). What about the web server itself? IIS, Apache, or Tomcat, to name the most popular selections. There are a lot of underlying parts that are susceptible to compromise. That brings me to my next point.
Now that we have identified that there are many underlying parts to the architecture, what about the lifecycle of a web application? In so many instances we see web applications that were developed at a point in time and hardcoded for specific versions of things like, probably the biggest offender, Java. In other cases the product is limited to what was available at the time of its development. Because of this it lacks compatibility with future releases of things like .NET upgrades, for example, which roll up fixes for multiple security flaws, until the application's maintenance lifecycle addresses it. That incompatibility forces the system to remain vulnerable until a patch can be developed and released within that lifecycle, and it forces a more controlled rollout of any remediation because of the potential adverse effect on availability. If an update is known to be incompatible, nobody wants a broken web server, especially not one performing a critical business function. Hmmm....now if I'm the hacker, which do I want to attack?
Some would argue that a centralized patching and remediation program would cure these issues once a patch is released. However, in my experience, most organizations that have switched to centralized patch management for efficiency and cost savings also suffer from the trade-offs. A program with a good patch management system would probably not be susceptible to this, right? Or would it...? What I have witnessed is the birth of a business culture that relies heavily on central remediation but does not necessarily check up on the agent health of the centralized solution. Perhaps the most dangerous part of this culture is that most organizations are geographically dispersed without much local representation to support the centralized services. Even a mature patch management process would struggle in this environment.
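To make that concrete, here is a minimal sketch (not tied to any particular product) of the kind of agent-health check I rarely see done. It assumes a hypothetical CSV export from the central console with "hostname" and "last_checkin" columns; the column names and the seven-day threshold are my own placeholders.

import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)  # assumption: an agent silent for a week warrants a look

def find_stale_agents(csv_path):
    # Flag endpoints whose patch-management agent has not checked in recently.
    now = datetime.now(timezone.utc)
    stale = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last = datetime.fromisoformat(row["last_checkin"])
            if last.tzinfo is None:
                last = last.replace(tzinfo=timezone.utc)
            if now - last > STALE_AFTER:
                stale.append((row["hostname"], last))
    return stale

if __name__ == "__main__":
    for host, last_seen in find_stale_agents("agent_checkins.csv"):
        print(f"{host}: last check-in {last_seen:%Y-%m-%d} -- verify the agent before trusting the dashboard")

Even something this simple, run against the console's own data, surfaces the hosts the centralized solution has quietly stopped protecting.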
Now let's look at a similar yet different aspect of patch management. It's what most like to call the rack-and-stack effect. Management never wants downtime on a critical system. So what are your options? Well, if you have failover, clustering, or other mechanisms of the like built in, you can remediate each system independently; if you cannot, then you are stuck with a server that is vulnerable until its patching window becomes available. The most common schedules I've seen for racking and stacking patches range from monthly to quarterly.
Now what I want you to do is think about these last two paragraphs - the pitfalls of centralized patching coupled with the inability to patch critical servers on demand. Now couple those two possibilities with all of the underlying framework architecture associated with a web application and ask yourself......Am I vulnerable?
Now sure, there are still several additional security measures at play here. For example, a reverse proxy is probably in use to protect the web server, as are firewall rules, perhaps even limiting access internally to specific trusted hosts. A good defense-in-depth program would certainly thwart or mitigate the potential for compromise, but nevertheless I wanted to implant the thought in your heads and get you thinking that perhaps the best approach is a holistic one.
Friday, September 12, 2014
Risk Management Frameworks
Risk Management is receiving much more emphasis in security today than ever before. Risk Management certainly is nothing new and has existed in many different forms over the years. I remember that when I was in the Navy around 1997, I was first introduced to Risk Management in the form of ORM - Operational Risk Management. Having recognized this trend, I decided to study and test for the ISACA CRISC certification this December. You can find the basics of Risk Management through my blog, but this is a much deeper subject than what I have posted. To completely understand Risk Management, intense study is required. Just to put that into perspective a little bit, the CRISC manual is roughly 400 pages.
ERM is probably the most popular form of Risk Management. Various models exist for ERM, such as the CAS and COSO frameworks and ISO 31000/31010. Tools like OCTAVE, developed at Carnegie Mellon and released in 2001, are also useful in identifying risks, as is the Risk Management Framework used for FISMA compliance.
ERM is also being imposed upon companies by U.S. law in several cases. Sarbanes-Oxley is just one such case and requires risk management in support of identifying fraud and fraudulent transactions. Even setting legal requirements aside, however, businesses are seeing increased value in identifying risks earlier and implementing risk management frameworks.
Risk can exist on many levels. There is overall risk to an organization as a whole, which may or may not be shared in common with each division, department, or work center. Because of this it is important that all risks are identified, cataloged, and reviewed regularly.
Risks are constantly changing, which makes them dynamic and hard to track. Because of this, risk must be evaluated regularly; annual or semi-annual reviews should be conducted. Risks should also be evaluated immediately upon any significant change, whether that is a change in market conditions, a change to an IT system or infrastructure, an office move, or a variety of other possibilities.
Perhaps the most beautiful thing about Risk Management is its versatility. The topic itself is very broad and applies to many things, while the principles used are always the same. One visit to RIMS and you will easily see that risks vary broadly, from credit ratings, to terrorist threats, to newly identified software threats, and even risks associated with various processes. I think it is safe to say that Risk Management is here to stay, is still in its infancy even though it's been around a while, and represents a promising career for anyone interested.
Friday, July 18, 2014
Password Security
Tonight I was just reading this article. First, let me state that while I disagree with this article, I do see their point. I just happen to think that leaving it up to individuals to discern which sites they can and cannot use "throwaway" passwords on is risky. I'm also of the belief that relaxing this posture will create a lazy culture. Reading this made me wonder which would be worse: taking an approach like this, using one to five complex passwords across multiple sites, or using some sort of password wallet software? Maybe there are other scenarios, but these are the ones that quickly came to mind during this 8-second popcorn thought. We really need to just go ahead and make the transition away from passwords.
Vulnerability Scanning: Identifying False Positives
It's been a while, but I appreciate the patience of my audience during these trying times.
I want to talk about vulnerability scanning today, and false positives in particular, because I see so many misunderstandings and so many people addressing this incorrectly. I regularly hear the popular opinion that vulnerability scanners produce above a 90% false positive rate. First, let's address a common misunderstanding between security personnel and administrators. Oftentimes the security administrator wants the admin to relax certain security settings so that the scan will get every possible result. Why? This seems backwards to the admin. Why would you want to relax the security settings? Because we are interested in finding every possible patch or configuration that needs to be fixed. Let's be honest: if I'm the black hat and I'm effectively attacking your network, I'm going to be looking for every possible attack vector. Some of these the administrators may "think" they have mitigated, but perhaps their mitigation is ineffective, resulting in a control gap. The key word in that sentence is mitigated - the flaw is still there, just better masked. Allowing the vulnerability scanner to detect every possible vulnerability leads to greater mitigation, or at least the opportunity for it. Also, when we scan, we want accuracy, and an open network reduces the opportunity to generate false positives.
False Positives! This is a biggie! In vulnerability scanning, a false positive is when the vulnerability scanner returns a finding when in fact that finding is not there - in other words, a vulnerability has been detected when patches, configurations, or mitigations that address the vulnerability are already in place. Sounds fairly straightforward, right? Well, it's not. This is an area that is regularly misdiagnosed, because only the truest security professionals actually know how to properly diagnose it. For example, let's say we generate a report for a Windows system and hand it off to an engineer or system administrator. They are likely to glance at the report, see an MS bulletin, look up the associated KB article, and check whether it is installed in Programs (Add/Remove Programs). Sounds logical, but there may be more to it. Much more, in fact. Let's say that a patch deploys a new dynamic-link library (DLL) file. The newly installed DLL may not be vulnerable, but if the old DLL wasn't removed during installation, that system could still be vulnerable. Why? The answer can be found by asking what a DLL is. In the simplest terms, a DLL provides modularity by exposing common functions through a single avenue. This means that if the DLL is present, it can be called by malicious code. So, yes, maybe whatever application was using the DLL isn't vulnerable any longer since it points to the latest version, but the vulnerable DLL is still present on disk. If it is there, it can be called, which makes the system vulnerable.
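If you want to check for that leftover DLL rather than just trust Add/Remove Programs, a rough sketch of the idea looks like the following. The file name and hash set are placeholders; in practice you would pull them from the vendor advisory or KB article for the specific patch.

import hashlib
from pathlib import Path

TARGET_DLL = "example.dll"          # hypothetical: the file named in the advisory
VULNERABLE_SHA256 = {"0" * 64}      # placeholder hashes of the known-bad builds
SEARCH_ROOTS = [Path(r"C:\Windows"), Path(r"C:\Program Files")]

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_leftover_dlls():
    # Report any copy of the target DLL whose hash matches a known-vulnerable build.
    hits = []
    for root in SEARCH_ROOTS:
        for dll in root.rglob(TARGET_DLL):
            try:
                if sha256(dll) in VULNERABLE_SHA256:
                    hits.append(dll)
            except OSError:
                pass  # locked or unreadable file; note it for manual follow-up
    return hits

if __name__ == "__main__":
    for dll in find_leftover_dlls():
        print(f"Vulnerable copy still on disk: {dll}")

The point isn't the script itself; it's that "the KB shows installed" and "the vulnerable file is gone" are two different questions.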
This isn't the only mistake I've seen in identifying false positives. Another common mistake comes from networks with short DHCP leases or removable hard drives (classified areas). In practice I have seen very well trained engineers dismiss vulnerability scanning reports simply because a lease expired and the IP could no longer be matched to a system. In the case of removable hard drives, I've witnessed administrators ping the IP listed in the report and close the finding when the system is unreachable. I understand that removable hard drives present certain circumstances to overcome, but typically, from what I've seen, removable hard drive environments are 1:1 - that is to say, each hard disk is labeled for a specific system.
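A simple way to avoid closing findings off a stale IP is to re-key the scan results by hostname (or MAC) before anyone starts pinging addresses. This is just a sketch; the column names ("ip", "hostname", "mac", "finding") are assumptions about whatever your scanner exports.

import csv
from collections import defaultdict

def findings_by_host(scan_csv):
    # Group findings by the most stable identifier available, falling back to IP last.
    grouped = defaultdict(list)
    with open(scan_csv, newline="") as f:
        for row in csv.DictReader(f):
            key = row.get("hostname") or row.get("mac") or row["ip"]
            grouped[key].append(row["finding"])
    return grouped

if __name__ == "__main__":
    for host, findings in findings_by_host("scan_results.csv").items():
        print(f"{host}: {len(findings)} open finding(s) -- verify against the asset, not the address")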
I've been conducting vulnerability scans for about nine years of my eleven years in information security. During that time the one thing I hear regardless of the site is that vulnerability scanner findings are 90-99% false positives. In practice, I'd be lying if I said I have never seen a false positive, but the fact of the matter is that if you know what you are looking for, the vulnerability scanner is typically correct.
Think about it for a second. The programmers writing these scanner rules are not throwing darts in the air and hoping they hit something. They are writing their code based on requirements, most likely coming from a CVE, BID, or vendor KB entry that identifies specifically what to check. Then they automate that check and release it in an update or in the latest engine. When you really take the time to break down a vulnerability scanner check, understand how the technology works, and compare the finding against the CVE, BID, or vendor KB, the finding is typically correct and thus represents a vulnerability.
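Stripped to its core, one of those checks is usually just a version comparison against whatever the advisory says is fixed. A toy example, with a made-up product and version numbers:

def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in):
    # Flag the host if the installed version predates the version the advisory says fixes the flaw.
    return parse_version(installed) < parse_version(fixed_in)

if __name__ == "__main__":
    # Hypothetical advisory: "ExampleApp versions before 2.4.1 are vulnerable."
    installed = "2.3.7"                        # e.g., read from the registry or package manager
    print(is_vulnerable(installed, "2.4.1"))   # True -> the scanner reports a finding

That logic either matches the advisory or it doesn't, which is why, once you compare the finding against the source, the scanner usually turns out to be right.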
In conclusion, BEWARE --> BE KNOWLEDGEABLE --> BE THOROUGH!