Howdy folks! I would like to assure my audience that SecuRelevance is alive and well. It has been five months since my last post, with some family and job priorities taking precedence. My goal is still, and has always been, to research and post on a topic every two weeks, and I believe I am finally in a position to do that again.
With that said, let's start the first post for 2015.
I was doing some research tonight when a thought occurred to me. Over roughly the past eight to ten years, many applications have shifted to front-end web applications backed by back-end databases. As I recall, it was only during the first three or four years of my career that most applications in use relied on a desktop console, so that timing feels about right.
I've heard a lot of hullabaloo over web application security with regard to things like Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), input validation flaws, and SQL injection, to name a few. But one area that just dawned on me is the underlying architecture that so much of a web application depends on: frameworks like .NET, and the languages and their libraries themselves (e.g., C#, C++, XML, or Java). What about the web server itself? IIS, Apache, or Tomcat, to name the most popular selections. There are a lot of underlying parts that are susceptible to compromise. That brings me to my next point.
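To make the SQL injection risk mentioned above concrete, here is a minimal sketch in Python. The in-memory sqlite3 database and the `users` table are purely illustrative stand-ins, not anything from a real application:

```python
import sqlite3

# Hypothetical in-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Vulnerable: user input concatenated straight into the SQL string.
# An input like "' OR '1'='1" rewrites the WHERE clause and leaks every row.
evil = "' OR '1'='1"
rows_vuln = conn.execute(
    "SELECT name FROM users WHERE name = '" + evil + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not as SQL,
# so the malicious string simply matches nothing.
rows_safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)
).fetchall()

print(len(rows_vuln), len(rows_safe))
```

The point of the sketch is that the fix is in how the query is built, not in the framework version, which is exactly why the surrounding architecture deserves as much scrutiny as the application code.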
Now that we have identified that there are many underlying parts to the architecture, what about the lifecycle of a web application? So often a web application was developed at a point in time and hardcoded for specific versions of its dependencies, with Java probably being the biggest offender. In other cases the product was simply limited to what was available at the time of its development. Either way, the application lacks compatibility with future releases, such as .NET upgrades that roll up multiple security fixes, until its maintenance lifecycle addresses it. That incompatibility forces the system to remain vulnerable until a patch can be developed and released within the lifecycle, and it forces a much more controlled remediation rollout because of the potential adverse effect on availability: if an update is known to be incompatible, nobody wants a broken web server, especially not one performing a critical business function. Hmmm... now if I'm the hacker, which one do I want to attack?
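The version-pinning problem above can be sketched in a few lines. Everything here is hypothetical, including the inventory, the underscore-style Java version strings, and the "minimum patched" baseline, but it shows how you might flag applications hardcoded to a framework build older than the current patched release:

```python
# Sketch: flag applications pinned to a dependency version older than
# the current patched release. Inventory and versions are hypothetical.

def parse_version(v):
    """Turn '1.7.0_45'-style strings into comparable tuples of ints."""
    return tuple(int(part) for part in v.replace("_", ".").split("."))

def needs_patching(installed, minimum_patched):
    return parse_version(installed) < parse_version(minimum_patched)

# Hypothetical inventory: app name -> the Java build it was hardcoded for.
inventory = {
    "legacy-portal": "1.6.0_21",
    "reporting-app": "1.7.0_45",
}
MINIMUM_PATCHED = "1.7.0_45"

vulnerable = [app for app, v in inventory.items()
              if needs_patching(v, MINIMUM_PATCHED)]
print(vulnerable)  # ['legacy-portal']
```

Even a crude report like this makes the exposure visible; the hard part, as discussed above, is that the fix may have to wait for the application's own maintenance lifecycle.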
Some would argue that a centralized patching and remediation program would cure these issues once a patch is released. In my experience, however, most organizations that have switched to centralized patch management for efficiency and cost savings also suffer from the trade-offs. Surely an organization with a good patch management system would not be susceptible to this, right? Or would it? What I have witnessed is the birth of a business culture that relies heavily on the central remediation solution but does not necessarily check up on the agent health of that solution. Perhaps the most dangerous part of this culture is that many organizations are geographically dispersed without much local representation to support the centralized services. Even a mature patch management process would struggle in this environment.
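Checking up on agent health doesn't have to be elaborate. Here is a rough sketch, with hypothetical host names, check-in timestamps, and staleness threshold, of flagging endpoints whose patch agent has gone quiet:

```python
from datetime import datetime, timedelta

# Sketch: flag hosts whose patch agent has not checked in recently.
# Hosts, timestamps, and the threshold are hypothetical.

STALE_AFTER = timedelta(days=7)

def stale_agents(last_checkins, now):
    """Return hosts whose agent last reported more than STALE_AFTER ago."""
    return sorted(host for host, seen in last_checkins.items()
                  if now - seen > STALE_AFTER)

now = datetime(2015, 1, 15)
last_checkins = {
    "web01": datetime(2015, 1, 14),   # checked in yesterday: healthy
    "web02": datetime(2014, 12, 1),   # silent for weeks: broken agent
    "db01":  datetime(2015, 1, 10),   # healthy
}
print(stale_agents(last_checkins, now))  # ['web02']
```

A host like web02 is the dangerous case: the central console may still show it as managed while it silently misses every patch cycle.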
Now let's look at a similar, yet different, aspect of patch management: what most like to call the rack-and-stack effect. Management never wants downtime on a critical system, so what are your options? If you have failover, clustering, or similar mechanisms built in, you can remediate each system independently; if you cannot, you are stuck with a server that remains vulnerable until its patching window comes around. The most common schedules I've seen for racking and stacking patches range from monthly to quarterly.
Now what I want you to do is think about these last two paragraphs: the pitfalls of centralized patching coupled with the inability to patch critical servers on demand. Then couple those two possibilities with all of the underlying framework architecture associated with a web application and ask yourself... am I vulnerable?
Now, sure, there are still several additional security measures at play here. For example, a reverse proxy is probably in use to protect the web server, as are firewall rules, perhaps even limiting access internally to specific trusted hosts. A good defense-in-depth program would certainly thwart or mitigate the potential for compromise, but nevertheless I wanted to plant the thought in your heads and get you thinking that perhaps the best approach is a holistic one.