Monday, December 29, 2008

Brian Chess, CTO of Fortify Software - Creating Confusion

So this entry goes to support my previous post about Insecure Security Technologies and some of the confusion that these vendors can cause. Recently Network World published an article titled "Penetration Testing: Dead in 2009" that cited Brian Chess, the CTO of Fortify Software, as its expert source.

The first thing that I want to point out is that Brian Chess is creating confusion among the non-experts who read the article linked above. A layman might actually think that Penetration Testing is going to be dead in 2009 and, as a result, might decide to buy technology as a replacement for the service. Well, before you make that mistake, read this entire entry. I'll give you facts (not dreamy opinions) about why Penetration Testing is required and why it's here to stay.

As a side note, Brian Chess has a vested interest in perpetuating this fantasy because his objective is first and foremost to sell you his technology.

Technology like Brian Chess's is a solution to a problem, which by definition means that the problem came first and the technology has always been a few steps behind. With respect to IT security, hackers are always creating new methods for penetrating networks (the problem). Because those methods of attack are new, existing technology can't defeat them (the solution doesn't exist yet). So if technology can't protect you, then how do you protect yourself?

The best way to protect yourself is to use a combination of technology (to solve known problems) and Penetration Testing (to identify the unknown). A properly executed penetration test will reproduce the same or greater threat levels than your infrastructure will likely face in the real world. This is akin to testing the armor of the M1A2 tank: you shoot the armor with RPGs and armor-piercing rounds so that you can study the impact and improve the armor to the point where it defeats the threat. As a result, Penetration Testing can move your security posture well past the limits of what technological solutions have to offer. My professional recommendation is that both technology and Penetration Testing should be used. Sorry, Mr. Chess, but telling people that Penetration Testing will be dead by 2009 is just fiction.

Moving on...

As a general rule of thumb I try to avoid saying that anything is 100% secure or invulnerable to attack, because that sort of claim is impossible to back up. But while reviewing the Fortify website I found the following text and thought it was worthy of note: "Fortify 360 renders software invulnerable to attacks from cyber predators." This sort of marketing fluff falls under the same class of confusing noise as Brian Chess's claim that Penetration Testing will be dead by 2009: total fiction. It is impossible for Fortify 360 to render software "invulnerable to attacks from cyber predators" unless the software has been mathematically proven to be secure, and it hasn't been.

If anyone disagrees with what I've said here, by all means leave me a comment. If you can prove me wrong then I'll happily make corrections, but I'm pretty sure I'm on the ball with this one. And Mr. Chess, if you think that your technology renders your customers "invulnerable to attacks from cyber predators," then I challenge you to let my research team test an evaluation copy of your technology. After all, the skills that we possess are, according to you, outdated and shouldn't pose a threat to your software. ;]

Tuesday, December 23, 2008

Insecure *Security* Technologies

There is not a single piece of software in existence today that is free of flaws, and many of those flaws are security risks. Every time a new security technology is added to an infrastructure, a host of flaws is also introduced. The majority of these flaws are undiscovered, but in some cases the vendor already knows about them.

As an example, we encountered a Secure Email Gateway during an Advanced External Penetration Test for a customer. When a user sends an email, the email can either be sent from the gateway's webmail GUI or from Outlook. If it is sent from Outlook, the gateway intercepts the email and stores the message contents locally. Then, instead of actually sending the sensitive email message to the recipient, the gateway sends the recipient a link. When the recipient clicks on the link, their browser launches and they are able to access the original message content.
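To make the flow concrete, here is a minimal sketch of the link-rewriting behavior described above. This is purely illustrative: the dictionary store, the `mailgateway.example.com` URL, and the function names are all my own stand-ins, not anything from the actual product.

```python
import secrets

# Stand-in for the gateway's local message store: token -> original body.
MESSAGE_STORE = {}

def intercept_outbound(body, gateway_url="https://mailgateway.example.com"):
    """Keep the sensitive message locally; mail out only a retrieval link."""
    token = secrets.token_urlsafe(16)
    MESSAGE_STORE[token] = body
    return f"You have received a secure message. View it at: {gateway_url}/view?id={token}"

def retrieve(token):
    """What the gateway serves when the recipient clicks the link."""
    return MESSAGE_STORE[token]
```

The consequence, of course, is that all of the "secure" message content now lives on the gateway itself, which is exactly why the box made such an attractive target.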

While this all looked fine, there was something about that gateway that made me want to learn more (a strange JBoss version response), so I did... I called the vendor and asked to speak to a local sales rep. When the rep got on the phone, I told him that I had an immediate need for 50 gateways but wouldn't make any purchases until I knew that his technology was compatible with my infrastructure. He got really excited and asked me what I needed in order to verify compatibility. I told the rep that I needed a list of all the Open Source libraries and software that had been built into the gateway, along with version information. The rep said that he didn't really understand what I was asking him, but that he'd go to someone in development and figure it out. Within about fifteen minutes I received an email with a .xls attachment. Shortly after that I received an email from the rep asking me to delete the attachment because he wasn't supposed to share that particular one... go figure...

(I deleted it after I read it)

When I studied the document I realized that the gateway was nothing more than a common, bloated Linux box with a bunch of very, very old Open Source software installed on it. In fact, based on the version information provided, the newest package installed was OpenSSL, and that was three years old! The JBoss application server was even older than that and was also vulnerable as hell (it had been modified to report incorrect version information). Needless to say, we managed to penetrate the secure email gateway using a published exploit that was also about three years old. Once we got in, our client decided that their secure gateway wasn't so secure any more and did away with it. We did contact the vendor, by the way, and they weren't receptive or willing to commit to any sort of fix.
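The version-checking logic that defenders (and scanners) rely on can be sketched in a few lines. The banners and the `(0, 9, 8)` cutoff below are hypothetical examples of mine, not the actual versions from the engagement; the point is that this whole class of check trusts whatever version string the service chooses to report, which is exactly why the doctored JBoss banner was misleading.

```python
import re

def parse_version(banner):
    """Pull a dotted version number out of a banner or Server header string."""
    m = re.search(r"(\d+(?:\.\d+)+)", banner)
    return tuple(int(p) for p in m.group(1).split(".")) if m else None

def is_outdated(banner, oldest_safe):
    """True if the reported version predates the oldest known-safe release."""
    v = parse_version(banner)
    return v is not None and v < oldest_safe

# An honest banner from an old package gets flagged...
print(is_outdated("OpenSSL/0.9.7d", (0, 9, 8)))
# ...but a banner that lies about its version sails right through the check.
print(is_outdated("JBoss-4.2.3", (4, 0, 0)))
```

This is why a banner grab alone is never proof of anything: you still have to throw a real exploit at the box, which is what a penetration test does and a version scanner does not.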

The fact of the matter is that we run into technology like this all the time, especially with appliances. We've seen this same sort of issue with patch management technologies, distributed policy enforcement technologies, anti-virus technologies, HIDS technologies, etc. In almost every case we are able to use these technologies to penetrate, or at least to assist in the penetration of, our target. While most of these technologies introduce more risk than they resolve, there are a few good ones. My recommendation is to have a third party assess the technology before you decide to use it; just make sure that they are actually qualified and not Fraudulent Security Experts.

Friday, December 19, 2008

Raising Infrastructural Awareness in 2008

Before 2008, nobody had done any high-visibility vulnerability research and exploit development against the systems used to maintain our critical infrastructure. In early-to-mid 2008 that all changed. Initially, Core Security released an advisory for a security vulnerability in CitectSCADA. That vulnerability got media attention because it could be used to penetrate the control systems that run our infrastructure (electricity, water, gas, oil, etc.).

When the vendor released their statement about the vulnerability, they downplayed the criticality of the issue in a very significant way. In our opinion that downplay was borderline unethical and was an attempt to save face. Fortunately for all of you who rely on electricity, running water, etc., we weren't going to stand for that. More specifically, Kevin Finisterre, our lead researcher, wasn't going to stand for it.

At first Kevin and I tried talking to the engineers about the criticality of the vulnerability. That discussion got us nowhere fast; the engineers simply didn't want to hear it and didn't want to assume responsibility for the problem. At that point Kevin decided to take the game to the next level, and this time the actual risk of the vulnerability would be proved.

Kevin decided that he would write an exploit for the CitectSCADA vulnerability; after all, the vendor said that it was a low-risk issue, right? So Kevin did just that: he wrote an exploit and published it to the Metasploit Framework. Once word of that got out, the attitudes at Citect and those of the engineers changed so fast that heads spun. All of a sudden this non-critical issue was a critical issue and something had to be done.

So why was it so important for us to do that? Why did we feel that it was the ethical thing to do?  Here's why....

An exploit had already been created by a few other people and was in circulation. So the bad guys had it and the good guys didn't. When Kevin published the exploit he leveled the playing field and gave the good guys guns of the same caliber. When the good guys fired those guns, the reality of their vulnerability became very apparent, and only then did they jump to work on the issues. That said, some of them are still vulnerable.

Throughout 2008 we kept on researching SCADA vulnerabilities and other security issues related to infrastructural systems. As it turns out, we attracted a lot more interest than we thought we would, and we had a much bigger impact on the industry than expected. Today Citect is taking security very seriously and many government agencies have become very aware of these risks.

Here is a podcast where people reference the work that we've done with vulnerability research and exploit releases. They never directly mention our names (go figure), but we all know who they are talking about.

Thursday, December 18, 2008

Utility Companies and Food for Thought

Something that I keep hearing from engineers (power, water, etc.) on the SCADASEC mailing list is that they are more concerned about human error causing an outage than about an attack over the Internet. Most of the incidents that I hear about are operator error: someone accidentally shuts down a computer system or perhaps configures one improperly (the utility guys like to call these "cyber" incidents). When that happens, things go to hell in a handbasket fast, and people can and do die. They seem to be more concerned about those types of "cyber" incidents than they are about the hacker threat... but they're not getting it, right?

The fact of the matter is that a malicious hacker could trigger any number of these "cyber" incidents, either deliberately or accidentally, and the end result is the same. How do we get these guys to take the threat more seriously? I think it's happening, but I don't feel like it's happening fast enough.

Wednesday, December 17, 2008

Fraudulent Security Experts

So I've been participating in the penetration testing mailing list that is hosted by SecurityFocus, and I can't say that I am impressed. In fact, I might even go so far as to say that I am concerned about the caliber of the people who are offering paid services. Here's why.

When a customer hires a security professional to perform a Penetration Test, Web Application Security Assessment, or any other service, that customer should be getting a real expert. That expert should be able to assess the customer's target infrastructure, application, or whatever else, and should be able to determine points of vulnerability and their respective risks. But that is not what I am seeing.

The other day a self-proclaimed "expert" asked how dangerous a SQL Injection vulnerability was. They had apparently identified a SQL Injection vulnerability in their customer's website but didn't know what to do with it!!! They also asked how to exploit the vulnerability and what successful exploitation might do.
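For anyone genuinely wondering how dangerous SQL injection is, here is a minimal, self-contained illustration (a hypothetical example of mine, not anything from that person's engagement) of the classic authentication bypass, alongside the parameterized query that prevents it:

```python
import sqlite3

# Toy database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: attacker input becomes part of the SQL statement.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input is treated as data, never as code.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The injected OR clause makes the WHERE condition always true.
print(login_vulnerable("alice", "' OR '1'='1"))  # True: authentication bypassed
print(login_safe("alice", "' OR '1'='1"))        # False: treated as a literal password
```

And that is just the gentle version. Depending on the database and the query, the same flaw can read or rewrite arbitrary tables, or in some configurations execute commands on the server. That is what successful exploitation "might do."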

Well, the first thing that came to mind was "Why the hell are you offering services if you don't know what you are doing?". I actually asked that, but I didn't get any response back from the original author. When someone hires a security professional to deliver security services, they expect those professionals to be subject matter experts. The unfortunate thing is that in most cases the customer has no way of verifying the professional's expertise, and the customer gets taken for a ride. (Take a look at our white papers!!!)

Another example is a recent vulnerability that one of my team members found. He was researching a product's web service and found that it was just chock full of holes. When he contacted the vendor, they responded with "but we just had a very extensive security assessment done against our product". We certainly couldn't tell... looks like they got taken for a ride like so many others.

Why is this a problem, and why do I care? It's a problem because the providers who offer these low-quality services advertise the same way as the high-quality providers. The difference is that their service doesn't do anything to protect the customer, and ours does. We're not the only good security company out there, but we are one of very, very few.

Wednesday, December 10, 2008

Conference with Green Hills Software

I recently gave a speech with Green Hills Software, Inc. in California. The presentation covered the real threat that businesses face, as opposed to the theoretical threat that most people seem to worry about more. I also made it a point to uncover some of the more unorthodox attack methods that hackers use, like spreading infected USB sticks in parking lots or using rapid Distributed Metastasis.

Here are some articles that were written as a result of the conference:


Monday, October 13, 2008

Die Hard 3 - Our Infrastructural Systems

Society has one very critical technological underpinning that goes unnoticed by most people, but not by hackers. If you’ve ever seen the most recent Die Hard movie, then you’ll have an idea of what I am talking about. That is, the world’s critical infrastructures are vulnerable to attack by hackers (scary but true). These infrastructures include, but are not limited to, water, power, communications, transportation, chemical plants, etc.

Critical Infrastructure existed well before the advent of the Internet. The systems that were deployed to support the infrastructure were designed for stability, reliability and redundancy. These are computer systems that are used to control massive pumps, generators, cooling pools, the flow of gas, and other critical devices. A failure in one of those computer systems can translate to a failure in one of those critical devices.

When the IT infrastructure behind these industries was first built, remote measurement devices would report data back home via dedicated network connections. In some cases people would physically travel to remote locations, take readings, and report those readings back to headquarters. Recently, however, infrastructural businesses realized the cost benefit of using the Internet in place of the dedicated lines and the traveling meter-reading engineers. What they didn’t consider was the seriousness of the Internet threat, and the capabilities of those who create that threat.

As a result, infrastructure in every developed country contains critical technological vulnerabilities that have yet to be discovered. Those vulnerabilities, if exploited successfully, could result in damages ranging from basic system outages to the deaths of many people. This is the cost of a premature reliance on technology that people don’t fully understand.

To make matters worse, the solution isn’t easily implemented. The problem is clouded with political noise, egos, and old-time engineers who resist change. Some of them might actually fear for their jobs, as well they should if in fact their skills are not unique. Others should fear for their jobs because they have neglected to protect critical infrastructure from the hacker threat. This isn’t a new problem, and it has existed for quite a while now, but we’re working to turn up the heat.

Yet it's still not quite that simple. Many of these systems can't just be patched; some of them are upgraded with forklifts. The ones that can be patched often still can't be, because taking them offline means that you lose power, water, emergency services, etc. Worse yet, if a patch is applied and fails 90 days into production, it can kill people. So the threat is literally two-sided: the fix creates a threat, and the hackers create a threat. How do we resolve this without letting either threat become a realized risk?

If you are interested in following the conversations, you should subscribe to the SCADASEC mailing list. The list is made up of a wide range of IT experts, including security specialists, control system experts, and control system security experts. As a group we’ll solve this problem, but if we keep arguing about semantics then we’re all in trouble.

Wednesday, September 10, 2008

CitectSCADA Exploit Release

SNOsoft/Netragard's Kevin Finisterre recently released an exploit, not "attack code," to demonstrate that a critical vulnerability does exist in Citect's CitectSCADA product. This code was released so that users of the product could accurately determine their own level of risk and exposure, as well as the seriousness of that risk as it relates to their infrastructure. It was released after the vendor, Citect, had created a fix for the vulnerability and after people had been given sufficient time to implement the fix.

It is important to understand that the risk to infrastructural businesses existed well before Kevin released his exploit code and well before Core Security released their advisory. The risk was born the moment the programming error in the CitectSCADA product was made. When Core Security identified the risk and notified the vendor, they began the process of defending infrastructural businesses against attack.

Citect responded very rapidly and appropriately to Core's discovery and released a fix for the issue. Shortly thereafter, Kevin created a working Proof of Concept ("Exploit") that enabled users of the CitectSCADA technology to test their own networks to see if in fact they were vulnerable to attack. In addition, Kevin worked with other security experts to help get an Intrusion Detection Signature developed that would detect any attempt at attacking a vulnerable system. That signature is available here.

In all reality, Kevin's exploit code was very unlikely to be the first version. Chances are very high that other hackers had already created an exploit to penetrate CitectSCADA systems. Kevin's release of his version of the exploit has a powerful negative impact on the exploit's value to malicious hackers. When a malicious hacker attacks a network, it is important that they are not detected, so they tend to attack vulnerabilities that are unknown to the general public. Once a vulnerability is disclosed to the public, it becomes detectable and quickly loses its appeal to malicious hackers.

Not only is the value of the exploit diminished by disclosure, but the chances of the exploit working against a target are also diminished. This is because network and system administrators can test their own networks using Kevin's tool and build defenses to defeat the attack even if they do not apply the Citect patch.

In closing, I would like to commend Citect for doing such a good job of dealing with this issue. Likewise, I'd like to commend the researchers and the people who pushed so hard to get this issue the attention that it needed. This is the first major step in the right direction toward protecting our infrastructural businesses, and those businesses are the most critical to our survival. Also, please remember: Citect's vulnerability is not unique. All software is vulnerable at one point or another.

Here are the articles:

The Register:
CSO Magazine:

CIO Magazine:
PC World:
Network World:

Friday, July 11, 2008

Core Image Fun House - Advisory

Netragard's SNOsoft Research Team discovered an exploitable buffer overflow vulnerability in Apple's Core Image Fun House version <= 2.0 on OS X. Netragard notified Apple and released a formal advisory that can be found here. Proof of concept is included in the advisory.

Monday, June 30, 2008

More Apple Bugs

I realize that it has been a while since I've written anything to our blog, and I assure you it's because our team has been busy. With that said, we've been sitting on a few vulnerabilities that were discovered a while ago, waiting for the vendor to release patches. Those vulnerabilities are going to be released very shortly on Netragard's website and to the mailing lists, but here's a sneak peek:

1) Fun House vulnerability with exploit code.
2) LP vulnerability, also with exploit code.

These should be posted within the next two weeks.

Wednesday, January 23, 2008

HackerSafe pwned

Back in early 2000, Kevin Finisterre and I were talking about HackerSafe and the risks that it posed to its customers. Primarily, if hackers monitor all HackerSafe websites, they will know when to attack a site based on whether the HackerSafe logo is present. Another issue that we have with HackerSafe-like services is that we feel people are getting a false sense of security. Automated tools like the ones used by HackerSafe (ScanAlert) do not identify the security holes that most hackers use to break into networks; instead, they only identify known issues.
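The monitoring trick described above is trivial to automate. Here is a hypothetical sketch (the marker string and messages are mine, and the real seal is served differently) of what an attacker watching a site would look for: the seal is just content in the page, so the day it disappears is the day the daily scan likely found a known hole.

```python
# Stand-in for the real seal's image or script URL embedded in the page HTML.
LOGO_MARKER = "hackersafe"

def seal_present(html):
    """True if the page carries the certification seal."""
    return LOGO_MARKER in html.lower()

def scan_signal(html_yesterday, html_today):
    """Flag the transition an attacker monitoring the site would watch for."""
    if seal_present(html_yesterday) and not seal_present(html_today):
        return "seal dropped: scanner likely found a known issue"
    return "no change"

print(scan_signal('<img src="https://seals.example.com/HackerSafe.gif">',
                  "<html><body>Welcome</body></html>"))
```

So the seal that is meant to reassure customers doubles as a free, continuously updated signal to attackers about which day a known vulnerability is live.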

Don't get us wrong: there is value in the services that ScanAlert offers. Their services help businesses keep up to date with patches and prevent them from missing the obvious, low-hanging fruit. For that very reason, services like HackerSafe have a very good ROI. Just don't feel 100% secure because you've got the logo; you're never 100%. Here's an article where our CTO commented on the recent HackerSafe pwnage.

Saturday, January 19, 2008

Hackers attack power companies

For quite some time I've been giving speeches about the physical damage that malicious hackers could cause with a well-crafted cyber attack. I've discussed how vulnerable our (the world's) core infrastructure is and how easily it could be disabled. As a result, many people have called me a conspiracy theorist or accused me of exaggerating. Well, unfortunately, now I can say "I told you so." This isn't the first time that this kind of technology has been attacked; the US Department of Defense did it during the Aurora Generator Test.

Friday, January 11, 2008

ZDNet Australia

Netragard's CTO was quoted in the following article, titled "2007: How was it for Apple". Here's the article, and here's the quote:

Adriel Desautels, chief technology officer for security company Netragard and founder of the SNOSoft research team, said: "If OS X had the same installed base as Windows, Linux and other systems, it would be less secure or at the very most, as secure as the other systems ... It's just a matter of what [attackers] focus on."