Through our award-winning IT-GRC platform, SecureAware®, we recently completed an asset-based ASV proof-of-concept demonstration for a very large merchant. This organization has over 3,000 locations globally and manages over 90,000 network assets. The consideration for integrating asset scan data was two-fold. First, our objective was to prove the ability to automate the integration of raw scan data by asset type, identified vulnerability, and recommended remediation plan. The remediation plan was also linked (by type/class) to the policy set for instant access by the asset owner. The second objective was to demonstrate the ability to integrate this information into the workflow by assigning each vulnerability to a specific asset owner along with a scheduled completion date, with the task trackable not only by the asset owner but also by the supervisor and any other designated observers and interested parties. All of this is done in an environment that captures timestamps and associated documentation for complete auditability.
Our next steps with this merchant are to collect the specifications for integrating this data into their current network asset compliance system to augment internal tracking, improve workflow, and increase visibility into their IT risk management posture, all in an effort to reduce their cost of compliance in the long run.
Gary B. Blume
Senior Vice President - Corporate and Business Development
Lightwave Security, Inc.
Atlanta, GA
Office: 404.939.8875
Mobile: 404.276.6192
Fax: 404.751.2830
E-mail: gblume@lightwavesecurity.com
Linkedin: http://www.linkedin.com/in/garyblume
Monday, October 25, 2010
Tuesday, October 19, 2010
How to implement ISO 27001 - Free Webinar!

Hello,
I wanted to let you know that we are organizing a free webinar called "How to implement ISO 27001?".
This free one-hour training is designed for organizations that plan to implement ISO 27001, and have no previous experience in such projects. This session will explain all the steps in ISO 27001 implementation, and provide tips on how to proceed with this complex task.
This webinar is in English, and covers the following topics:
- Plan - Do - Check - Act cycle
- ISMS scope
- ISMS policy
- Risk assessment and treatment
- Risk assessment report
- Statement of Applicability
- Risk treatment plan
- Annex A - overview of controls
- Four mandatory procedures
- Document management
- Records management
- Internal audit
- Management review
- Corrective and preventive actions
The webinar is delivered by Dejan Kosutic, the author at Information Security & Business Continuity Academy.
To register for this webinar, please visit: https://www3.gotomeeting.com/register/794135934
About the organizer: Information Security & Business Continuity Academy is the leading online resource for ISO 27001 and BS 25999-2 implementation. Visit http://www.iso27001standard.com/.
Best regards,
Dejan Kosutic
Monday, October 4, 2010
HIPAA Violations Not Always Due to Data Breaches

Contributed By:
Jack Anderson
On an early album, George Carlin (RIP) talked about being raised Irish Catholic. Remarking on mortal sins, he observed that if you woke up in the morning and decided to go across town and commit a mortal sin, you could save your bus fare, because you had already committed a mortal sin just by thinking about it.
Similarly, you don't need a patient data breach to be in violation of HIPAA rules and regulations. By doing nothing, not even thinking, you have probably already committed a violation.
For example, if you have a business associate (BA) agreement in place, you are required to be compliant with the terms of that agreement, now. If you don't have a breach notification program in place, you are in violation, now.
If you don't have a privacy program in place you are in violation, now.
But, you say, I am a small company and how would they know? Let me count the ways:
1. Your covered entity detects a pattern of non-compliance, such as you sending unsecured PHI, and is required to either help you fix the problem, or sever your contract and report you to HHS.
2. A whistleblower (employee, ex-employee, patient, ex-patient, wife, ex-wife, etc.) reports you in hopes of collecting the reward offered by HHS.
3. An unannounced audit by OCR, the enforcement arm of HHS. OCR is required by Congress to audit and has hired an outside firm to begin auditing in Q4 2010.
4. A state attorney general files suit in federal court, as allowed by the HITECH Act.
5. A patient data breach, which must be reported.
The good news is that just starting a compliance program earns you a lot of points. New cloud computing solutions are also cost-effective and efficient for even the smallest companies: a small company can get started for only $125 and can stay compliant, and prove it, for only $35 per month. That is less than your latte budget.
Tuesday, September 21, 2010
ISO 27001 vs. ISO 27002

Contributed By:
Dejan Kosutic
If you have come across both ISO 27001 and ISO 27002, you probably noticed that ISO 27002 is much more detailed and much more precise - so what is the purpose of ISO 27001?
First of all, you cannot get certified against ISO 27002 because it is not a management standard. What does that mean? It means that such a standard defines how to run a system; in the case of ISO 27001, it defines the information security management system (ISMS) - therefore, certification against ISO 27001 is possible.
This management system means that information security must be planned, implemented, monitored, reviewed, and improved. It means that management has its distinct responsibilities, that objectives must be set, measured and reviewed, that internal audits must be carried out and so on.
All those elements are defined in ISO 27001, but not in ISO 27002.
The controls in ISO 27002 are named the same as in Annex A of ISO 27001 - for instance, in ISO 27002 control 6.1.6 is named Contact with authorities, while in ISO 27001 it is A.6.1.6 Contact with authorities. But, the difference is in the level of detail - on average, ISO 27002 explains one control on one whole page, while ISO 27001 dedicates only one sentence to each control.
Finally, the difference is that ISO 27002 does not make a distinction between controls applicable to a particular organization and those which are not. ISO 27001, on the other hand, prescribes a risk assessment to be performed in order to identify, for each control, whether it is required to decrease the risks and, if it is, to what extent it should be applied.
The question is: why do those two standards exist separately? Why haven't they been merged, bringing together the positive sides of both? The answer is usability - if it were a single standard, it would be too complex and too large for practical use.
Every standard from the ISO 27000 series is designed with a certain focus - if you want to build the foundations of information security in your organization and devise its framework, you should use ISO 27001; if you want to implement controls, you should use ISO 27002; if you want to carry out risk assessment and risk treatment, you should use ISO 27005; and so on.
To conclude, one could say that without the details provided in ISO 27002, controls defined in Annex A of ISO 27001 could not be implemented; however, without the management framework from ISO 27001, ISO 27002 would remain just an isolated effort of a few information security enthusiasts, with no acceptance from the top management and therefore with no real impact on the organization.
Thursday, September 16, 2010
Can A Business Continuity Strategy Save You Money?

Contributed By: Dejan Kosutic
You are thinking about implementing business continuity management and the BS 25999-2 standard, but you hear it will cost you a lot? It probably will cost you something, but not necessarily as much as you thought - and that is exactly what a good business continuity strategy can address.
Business continuity strategy, as defined in BS 25999-2 standard, is an "approach by an organization that will ensure its recovery and continuity in the face of a disaster or other major incident or business disruption".
Therefore, the point is to prepare yourself in the best possible manner to counteract a disaster, should one occur. This preparation can include organizational measures (drawing up plans, making contracts with suppliers/partners, exercising, reviewing, awareness raising, etc.) as well as measures requiring investment in equipment, infrastructure, etc.
Time is a very important factor in recovery - if you do not recover your business in time, you will probably lose your customers and, consequently, your business as well. So the business continuity strategy must set a recovery time objective (RTO) for each of your critical activities, and the RTO can be different for each of them.
One important consideration: the shorter the RTO, the bigger the investment you will need. For instance, if you want to recover your data centre in less than one hour, you will have to invest, at an alternative location, in almost the same equipment as at the primary location; on the other hand, if you want to recover your data centre in two weeks, the investment will be much lower, because it would be enough to store the backup tapes at the alternative location, leaving you two weeks to obtain the necessary equipment. All this means that your RTO must not be too long, but not too short either.
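The RTO/cost trade-off above can be sketched as a toy decision function. The three recovery options are the standard hot/warm/cold-site choices, but the RTO thresholds and cost figures below are invented purely for illustration and are not values taken from BS 25999-2:

```python
def recovery_strategy(rto_hours: float) -> tuple[str, int]:
    """Pick an illustrative data-centre recovery option for a given RTO.

    The thresholds and cost figures are assumptions for illustration
    only, not numbers from BS 25999-2 or any real pricing.
    """
    if rto_hours <= 1:
        # Hot site: mirrored equipment at the alternative location.
        return ("hot site", 500_000)
    if rto_hours <= 72:
        # Warm site: infrastructure ready, data restored from backups.
        return ("warm site", 150_000)
    # Cold site: backup tapes stored offsite; equipment bought on demand.
    return ("cold site", 20_000)

# A one-hour RTO costs far more than a two-week (336-hour) RTO.
print(recovery_strategy(1))    # ('hot site', 500000)
print(recovery_strategy(336))  # ('cold site', 20000)
```

Running each critical activity's RTO through a table like this makes the cost of an overly aggressive RTO visible early in the planning.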
Once the RTO is set, you will still need to make some investment; however, with a good business continuity strategy you will be able to decrease that investment, while still being able to recover your critical activities within the recovery time objective. Here are some examples:
- you might not need your own data centre at an alternative location - in most countries you can rent one from a specialized company, which means you don't need to invest in infrastructure, and maybe not even in equipment or software;
- you might not need offices at an alternative location - employees who do not have to meet customers face-to-face can work from their homes;
- you might not need an alternative location at all if you have other business units at different locations which could take over the critical activities affected by the disaster;
- you might not need to purchase equipment in advance if you can find a supplier that can guarantee delivery of the equipment within your RTO.
In all these examples you will need to increase your organizational capabilities, but if you want to save some money, it sure is something worth thinking about.
Cross posted from ISO 27001 & BS 25999 blog - http://blog.iso27001standard.com
Wednesday, September 1, 2010
Advice for Merchants on PCI DSS

Contributed By:
PCI Guru
Barring the card brands developing a truly secure card processing process, the PCI DSS and related standards are likely to be with us for quite a while. That said, what is the future of complying with the PCI DSS?
For merchants, if you are not seeking out point-of-sale (POS) solutions that do not store cardholder information, you should be as soon as you possibly can. That includes finding card processors that do not require you to store cardholder information and can provide you access to cardholder information when you need it for resolving disputes and chargebacks.
According to Robert McMullen, CEO of TrustWave, the majority of breaches TrustWave investigated occurred with POS systems. So the rational approach to resolving this problem is to get rid of the cardholder data stored on these systems.
The problem is that most merchants, large or small, think they need to store this information for some reason. If you are a merchant in the United Kingdom, France, Italy, or certain other European countries, then you do need the PAN unencrypted - but only on the original printed receipt; it is not required to be stored anywhere else.
So, all merchants need to put POS solutions in place that do not store cardholder data. You do not need it and it puts you at risk if you do store it.
The next thing merchants need to do is to find a card processor that does not require the merchant to store cardholder data. This can be a processor that uses tokenization or whatever, but the bottom line is that the processor does not return cardholder data to the merchant’s systems.
These processors typically provide secure Web-based systems that allow the merchant to view all of their transactions processed and, if necessary, provide a method to decrypt the PAN for dispute research and chargebacks.
Merchants need to restrict access to the processor’s applications to only those people that absolutely need access to perform their job. These people should be reviewed at least quarterly to ensure that they continue to require access.
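That quarterly review lends itself to a simple automated check. The sketch below assumes a hypothetical user record with `name` and `last_reviewed` fields; nothing here comes from any specific processor's API:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # "at least quarterly"

def overdue_reviews(users, today):
    """Return the names of users whose access review is overdue."""
    return [u["name"] for u in users
            if today - u["last_reviewed"] > REVIEW_INTERVAL]

# Hypothetical access list for the processor's application.
users = [
    {"name": "alice", "last_reviewed": date(2010, 8, 1)},  # 31 days ago
    {"name": "bob",   "last_reviewed": date(2010, 3, 1)},  # 184 days ago
]
print(overdue_reviews(users, today=date(2010, 9, 1)))  # ['bob']
```

A report like this, fed from the access list, turns "reviewed at least quarterly" from a policy statement into something you can actually demonstrate to an assessor.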
For those of you that just cannot get rid of cardholder data, there is the option of hashing. Hashing allows applications such as fraud discovery, member tracking, rewards programs, and similar functions to continue; they just do not have access to the actual PAN.
A given PAN always produces the same hashed value, so research and analysis of PANs can still occur. If you need to see the real PAN, you will have to go to the processor's system to obtain it.
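One way to get this consistent-hash property is a keyed hash. The sketch below uses HMAC-SHA-256 rather than a bare hash, because the PAN space is small enough that unkeyed hashes can be brute-forced; the key shown is a placeholder, and in practice it would be held in a key-management system, not in the code:

```python
import hmac
import hashlib

# Placeholder key for illustration only; a real deployment would fetch
# this from an HSM or key-management service, never hard-code it.
SECRET_KEY = b"example-key-not-for-production"

def hash_pan(pan: str) -> str:
    """Return a keyed HMAC-SHA-256 hash of a PAN as a hex string."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

# Equal PANs hash to equal values, so fraud analysis and rewards
# matching can run on the hashes without ever seeing the real PAN.
assert hash_pan("4111111111111111") == hash_pan("4111111111111111")
assert hash_pan("4111111111111111") != hash_pan("5500000000000004")
```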
The travel industry, in particular hoteliers, is really behind the eight ball on PCI because of the need to keep the PAN, sometimes for years, due to the way reservations work. However, this is where tokenization can earn its keep.
If a hotel takes a reservation and gets back a token when the credit card is authorized, the hotel can reuse that token as many times as needed for check-in and check-out. Again, there is no reason for the hotel to retain the actual PAN.
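A toy vault makes the division of labor concrete: the token-to-PAN mapping lives only at the processor, while the merchant stores and reuses the token. All names below are invented for the sketch, not any real processor's API:

```python
import secrets

class TokenVault:
    """Processor-side vault; the merchant never stores the PAN."""

    def __init__(self):
        self._vault = {}  # token -> PAN, held only at the processor

    def tokenize(self, pan: str) -> str:
        # The token is random, so it carries no information about the PAN.
        token = secrets.token_hex(16)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Restricted operation, used only for disputes and chargebacks.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # taken at reservation time
# The hotel reuses the same token at check-in and check-out...
assert vault.detokenize(token) == "4111111111111111"
# ...and the token itself reveals nothing about the card number.
assert token != "4111111111111111"
```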
The bottom line to all of this is that there are ways to minimize your organization’s PCI compliance efforts just by getting rid of the data in the first place. So, stop putting forth efforts to comply and get with the movement to get rid of the cardholder data in the first place.
I have had a few clients go down this road, and PCI compliance is now a piece of cake. Their networks are still in scope for transmission, and in some cases their applications do process cardholder data, but there is no storage, which makes them much, much less of a target.
Wednesday, August 25, 2010
Communicating with Busy Business People
Contributed By: Jeff Snyder - SecurityRecruiter.com
Communication Overload
Our world has been overrun with automated phone calls, emails, Facebook alerts, invitations to connect, text messages, tweets, newsletters, webinar invitations, and the list goes on and on.
Though it is likely hard to believe, this security recruiter receives in excess of 500 emails per day. Add to that additional messages on various social networks like Facebook and LinkedIn, and pretty soon the amount of data I review over the course of a day is overwhelming.
I’m not by any means alone in this world of data overload. Think for a moment about a busy human resource professional’s potential to be overloaded and quite frankly overwhelmed with data. I’ve been told by hiring authorities that they’ve been the recipient of 300-500 resumes when they post a job to a major job board.
Too Many Emails
If they experience what I’ve recently experienced, these hiring authorities are likely overwhelmed as well. Consider this recent example and then multiply it many times over. A security job seeker sent a resume to my office at 3 PM on a Friday afternoon. By that time in the day on a Friday, the rest of the day is booked and I’m trying to get out of the office by 6 PM to spend time with my family.
This resume that arrived at 3 PM was not reviewed on Friday afternoon. Having not received an instant response, the security job seeker sent another resume at 8 AM on Monday morning. Add this email to the Friday email and there is a small pile building. Monday mornings in the office of a security recruiter, a human resources representative or a hiring decision maker are very busy.
On Monday afternoon, the security job seeker who sent an email on Friday afternoon and then again on Monday morning sent another email asking about the status of his resume that had yet to be reviewed. Add these emails to the hundreds of other messages that accumulated in our office between Friday and Monday and anyone can soon see how data overwhelmed we can become.
A Different Approach
On a practical level, may I suggest that a resume sent to anyone anywhere late on a Friday afternoon will likely not be reviewed until Monday. Sending another resume on Monday morning before the recipient has a chance to clean up a weekend’s worth of junk mail to get their Inbox down to relevant business communication is probably not a good idea.
Consider instead waiting until later in the day on Monday and leaving a carefully thought out voice mail. The right voice mail left at the right time for the right person just might cause that person to go into their overflowing Inbox where they just might look for the Friday email mentioned politely in the carefully thought out voice mail.
Consider This Approach
Here is another idea. I recently followed up on a security job opening at the “C” level. The position had been open since January, so while I didn’t know exactly what was wrong, I knew something was wrong with this company’s search process. Rather than sending email, I placed a strategically timed call to the company’s global CSO. Even though I caught the CSO in a meeting, we were on the phone long enough for me to get his permission to send an email. He even stopped long enough to give me his email address.
I followed with a well-written email that briefly introduced my company. Remember, the CSO was now expecting my email even though he didn’t know me prior to the moment when I made his phone ring.
A couple of days went by without any communication from this prospective new client. I had a suspicion that the CSO was traveling and I was correct. After several days, I sent another email to the CSO. This email was different from the first email and it quickly caught the CSO’s attention. While sitting in another country, the CSO communicated with an HR Director in the US and asked her to schedule a call with me.
Setting a business appointment happens this way all the time, but a carefully planned communication approach with thoughtfully spaced-out communication attempts will be appreciated by busy business executives more often than not.
Connecting With Busy Business People
The next time you’re trying to reach a busy recruiter or a busy human resources representative or a busy hiring manager, stop and think about the data overload the person you’re reaching out to might be experiencing. As silly as it might sound, try calling the person you’re trying to reach and asking for permission to use their Inbox. Since this type of call rarely comes in to my office, I guarantee you that you’ll be setting yourself apart in a good way by asking permission before adding another email to a busy person’s Inbox.
Wednesday, August 18, 2010
Better Security Through Sacrificing Maidens

Contributed By: Pete Herzog
I began this as an answer to some questions, but then I realized I will never successfully explain the OSSTMM 3, security metrics (the ravs), and trust metrics if I only answer the questions asked. I need to address this properly by explaining the background as well, because the OSSTMM 3 is apparently very different from what most people expect of a professional security model, or from what they even think security is.
I think the problem people have with the OSSTMM 3 is that they expect that some things are required or necessary in security and they just don't find it there. They think estimating attack frequency, attack types, and vulnerability impact are all needed to properly and successfully defend themselves. But those things aren't used in the OSSTMM (except in very special cases of physical and wireless security verification testing) to build "good enough" security. This leads people to think it's missing or wrong.
Now we all see people who say that security is about the process and we see them fighting a losing battle. We see them just do more of what they're being told to do by the compliance requirements, books, and blogs and it's not working or it's not scaling. The problem is we are being taught to build defenses like consumers and it isn't working.
That's why we took a different direction with the OSSTMM 3. If we keep doing what we know doesn't work even "good enough", why keep doing it? It wasn't until we accepted that there are things we can never reliably know that we realized we had better find the limits of what we did know. So then at least we'd have that going for us. For example, we know that we can't reliably determine the impact of a particular vulnerability for everyone in some big database of vulnerabilities, because it will always depend on the means of interaction and the functioning controls of the target being attacked. But we do know how a particular vulnerability works, and where. Which means we needed a way to categorize and rate vulnerabilities not on some arbitrary weight of potential impact, but rather on what they do. Then, for anyone whose operations match where the vulnerability applies and who is missing the controls that would contain or stop it, we would really, truly know that the impact would be greater than zero. Therefore, by focusing on operations, we can devise tactics to respond to them.
Next we realized we had to look for the security particle. What can we use to make security? Where is the security equivalent of materials science? How can we reliably build a strong defense if we don't know what it even means (or more interestingly, how the hell are we selling it if we don't know what it means). So we needed to do some serious fact finding. We needed ground rules that we know we can use as a solid foundation. For example we know that there's only 10 types of operational controls which can be applied, 5 which protect through interaction with the threat and 5 which don't. We know that authentication will ALWAYS fail if either authorization or identification are stolen or misappropriated. We know that there's only 2 ways to take a physical asset- you either take it or you have it given to you. We know that operations require interactions with something and that something can be malicious. So we designed a way to reliably verify what we know and organize the information into intelligence.
Now that we were fact-finding, we found that much of what had been assumed to be fact, and turned out to be false, came from opinions of authorities. As a matter of fact, did you know that there's a huge, common body of security knowledge out there built mostly on anecdotal evidence and authoritative opinions passed around via transitive trust (X trusts Y, and I trust X, so I can trust Y) that is used as if it's all true? I know, I am shocked as well! All of this led to a general hack and slash through OSSTMM 2, leaving it as hollow as a pun at a funeral. We needed to start over using only the facts.
As we built the new OSSTMM as version 3, we began presenting and teaching these facts. I won't lie to you and tell you it was as pretty as a royal wedding in June. There was, ummm, "resistance". The consensus was that you can't deny that some attacks are more persistent, more threatening, and more damaging than others. We didn't deny it. But the security industry wants you guessing how criminals are going to attack, which is often a psychological exercise of "thinking like a criminal" performed by people with nice homes, nice jobs, and a good night's sleep last night. Did you know you can even be certified as able to think like a hacker because you use the same tools they do? I know, I am shocked as well! They like to tell you that criminals follow a pattern, but they really don't (see the Hacker Profiling Project for evidence of that). What we were seeing were the inherently unqualified opinions present in Risk, marketed as fact within the security industry. Risk is a real thing. It exists. However, the results of determining Risk are often made up.
Insurance companies use mountains of historical data to reduce risk. Wall Street uses mountains of trends current to the most recent second to reduce risk. Casinos use predetermined probabilities to reduce risk. As it turns out, the security industry uses quick response to reduce risk. Whether it comes from attacking our own software in vulnerability research, the use of AntiVirus to show us what's infecting us, or any of the hundreds of types of ways we have to show us we've been hit, security is an industry that uses current losses to protect future investments. Not only is that pretty dangerous but it's a horrible case of tunnel vision because it leads to defenses against specific attacks which had already happened. So the typical enterprise security today is one that is properly prepared to sacrifice something to an attacker now so they will be 100% prepared against it later.
For this backwards method we have to thank all those who think they should use Risk in the security industry. They don't realize it can't work the way it does in other industries. For example, in security, the types and areas of attack change with technology, so the use of historical data, as insurance companies have, is just not relevant. Unlike Wall St., we can't watch all the current trends with enough insight to know for certain what the next attack will be, or with enough speed to react before it hits. That doesn't stop security from making it look like we can: holes are secretly disclosed to software companies, which then release patches; security experts predict the patches will get reverse-engineered and turned into exploits; and when it inevitably happens (anywhere, on any scale), the experts are lauded by their followers for their predictive prowess. And you maybe wondered why some security researchers get ticked off when you do full disclosure - because you're SOC-blocking their moves!
Another valid point is that Wall St. races other people to jump on and off trends whereas we need to race packets which travel at the speed of light. This also makes me wonder if the people who bought into "real time network monitoring" heard the fable of the tortoise and hare so often as children that they took it literally (or never turned on a light switch?). Finally, we also don't have the luxury of allowing some big losses like casinos where we can fix the odds and just hope to survive through the heavy hits because we'll win in the long run (although it looks like some government departments are actually trying this).
Now some of the Risk analysts within the security industry tell us that the problem isn't that we can't predict it but that there's too many data points right now to reliably guess the future. Basically, we need to get better at guessing. They say we need better models because then we can better forecast the problems. I see this approach in the other industries and I don't need to tell you how poorly it prevents financial meltdowns on Wall St., how exclusionary the guidelines are for getting pay-outs from Insurance companies, or how many lives are destroyed through gambling addictions at casinos. The truth is that in all other industries using Risk there has to be a loser. And the loser, unfortunately isn't the attacker. It's one of us. It's one of the ones we should be defending. It's like the story where the king feeds a maiden to the dragon every full moon to protect the rest. The dragon isn't losing. This is not the way to keep a town safe by sacrificing some of its denizens so the others can survive. What happens when another dragon shows up? And werewolves? And then the people turn into zombies? Threats change and come from unexpected places. The worst way to handle threats is to try to estimate them out of existence with Risk. Because it allows you to ignore some of the impact as inconsequential to the greater, or more selfish, number of beneficiaries. If you remember the story, the king didn't like it too well when it was his daughter that was fed to the dragon.
When we look at why we need Compliance, it's because of selfishness. Businesses put their profits above defending their customers and business partners. Interestingly, the Compliance rules themselves are written for the greater good, which means that some companies won't be able to afford the required products and therefore can't do business their way online. So the rules need to be lax enough that only an acceptable number of companies can't afford them. Still, some of those who can't afford them will try to circumvent the rules to stay in business. But the Risk estimates will have considered this and made sure that only an acceptable number of people get hurt by those companies. What you have here is the use of Risk to further manage Risk, and it's not working. We're just feeding the dragon.
At ISECOM we saw that what we needed was a way to create security so that the only loser would be the attacker. That meant we had to do it without regard to the type of attacker, what their motives are, and what the probabilities are that they will only want to eat a maiden during the full moon. That's how we learned that you don't even need to know what the threats are or might be to defend against them reliably. That's the funny thing: you are protecting against the unknown anyway. So if you don't need to know the threats, then you don't need to know the impact of a particular threat or the result of a particular vulnerability either. You just need to know what limits your controls have and which operations are interactive with which parties. Now this isn't us saying that Risk goes away, not at all, but we are not looking for acceptable or "good enough" security at the expense of our own. So we do not use Risk to build our security. Instead we suggest you use the facts we know about security and the facts that give us reason to trust.
To build and verify security without using Risk, you need to learn the three main tools in the OSSTMM 3 which help you do this. Without them, you won't be able to do it successfully. The good news is that you won't have to rebuild anything from scratch; you just need to verify and categorize what you have and how it works.
The three tools are operational security metrics, trust metrics, and an OSSTMM 3 test. All three come from the same research, but each provides different intelligence. This leads some people to get confused and find the whole thing overly complicated, apparently worse than guessing, which is easy to do although nearly impossible to do consistently right. (That is why in security these days being right isn't as important as showing your work: failing through the status quo also counts as success in this screwed-up security culture, with its acceptable CYA phrases like "If an attacker wants in, they'll get in no matter what" and "There's no such thing as perfect security.")
The OSSTMM 3 test provides the following intelligence:
1. What the scope is and which were the targets tested,
2. What the test type and vector are,
3. Classification and enumeration of interactive points (operations),
4. Classification and enumeration of operational controls,
5. What types of tests were NOT performed on the scope,
6. What are the limitations of any of the controls,
7. Which operations do not work as expected (they usually provide additional, unwanted, or unknown interactive points).
That is what you need to know in order to calculate the Attack Surface, for which we use ravs, a measurement (like mass) that shows the balance between operations, controls, and limitations. The really good thing about ravs is that they are not weighted. The values of particular vulnerabilities do not come from someone's assumptions of impact but from which interactions you allow and which controls you have in place to mitigate damage. This also means that ravs can be compared regardless of target types or scope.
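As a loose illustration of that unweighted balance, here is a toy sketch. This is deliberately not the official OSSTMM 3 rav formula, which is considerably more involved; the function and its inputs are simplified inventions to show the idea that nothing is weighted by assumed impact.

```python
# Toy illustration of an unweighted attack-surface balance between
# interactive points (operations), controls, and limitations.
# NOTE: NOT the official OSSTMM 3 rav calculation; it only shows
# that no vulnerability is weighted by someone's guess at impact.

def toy_balance(operations, controls, limitations):
    """Percentage-like score: 100 means every interactive point is
    covered by a control and no limitations were found."""
    if operations == 0:
        return 100.0  # nothing interactive means nothing to attack
    protection = min(controls, operations) / operations  # no over-counting
    exposure = limitations / operations
    return round(100.0 * max(0.0, protection - exposure), 1)
```

Because nothing is weighted, two scopes of very different sizes can still be compared on the same scale, which is the comparability property described above.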
The OSSTMM test results are designed to provide a lot of different information clearly for the analyst. What they won't do is tell you which kinds of attacks are coming, how often, from where, and what the financial loss of an attack will be. But you now have much more exact information to calculate those things if you want to, because you know exactly how vulnerable each system is, alone and collectively: the points of failure, the only places where an attack can be made, the missing controls, and the redundant, useless controls. You also know what wasn't verified and is therefore unknown.
You might not know whether an exploit will happen, but if it can, you'll know the paths it can take, which servers or services will succumb to which type of attack, and therefore the only types of attack you can expect to get through and those that will not because the right controls are in place.
To better organize that information we have the STAR, and we have the rav calc sheet in OpenDocument format and in XLS format.
The STAR allows you to give a new type of overview to your client which shows exactly what is deficient, where and why. It shows which tests were not done and why. It allows for future comparisons with other tests from other consultants. It allows for continuous internal verification and measurement of change or improvements. It allows a business to manage security based on need instead of speculation. Therefore, a business could address Compliance by having a particular percentage rav instead of particular products. It would turn an enterprise's security from being a reactive, consumer culture to a preventative, resourceful one.
The rav calculation sheet is how the Analyst organizes the information from the OSSTMM 3 test. A security test may require multiple rav calc sheets, as a new one is suggested for each change in vector, channel (physical, wireless, data networks, etc.), or type of test (black box, gray box, reversal, etc.). All of these can later be combined in aggregate for a "big picture" view, but for analysis purposes it is easier to keep them separated. This sheet lets you see easily what needs controls, which controls are redundant, and which services should be closed. One of the more interesting things you'll see when you use it is how narrow the controls are in the modern "secure" network. Sure, it's defense in depth, but that doesn't help you when you're protected by the same type of control all the way to the core. Bypass one type and you bypass them all. Almost all modern security is focused on Authentication, which is interesting because the identification process everywhere is pretty bad, and on the Internet it's downright awful. Next, you'd see some Confidentiality because of all the encryption being built into protocols by default. However, it's Alarm that is the most prevalent control, because modern network security is reactive. It's all about waiting for the dragon to show up and feed on one of your maidens before alerting the rest of the town that the dragon came back.
The rav calc sheet can be as granular as you want, as in the SCARE project, which shows how to use it with source code, or as with the companies who measure web app attack surfaces by the interactive points in the web app itself. So you get the info you need at your fingertips to make bigger, better decisions. One of the handy things about this is placing monetary values on the server, service, app, or whatever, based on business process requirements. These requirements provide historical business data from which to make forecasts, which you can compare to what that server, service, or app cost to build and what it costs (perhaps annually) to keep running and controlled. This sheet can be your sandbox. Right on the sheet you can play your war games: closing services, adding the results of products you haven't bought yet, seeing what happens when a particular service is compromised or denied, all to see how much the attack surface changes before you physically change a single thing on your servers. That rav delta can then be assigned a value based on operating costs and income from the business processes it is a part of, to see whether the new product gives enough bang for the buck.
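That cost-benefit exercise can be sketched in a few lines. The valuation rule here, pricing each point of rav improvement as a share of the protected process's annual income, is an assumption for illustration only, not anything the OSSTMM prescribes.

```python
# Hypothetical sandbox arithmetic: weigh the rav delta from a proposed
# change against what the change costs, using the income of the
# business process it protects. The per-point valuation is invented.

def worth_it(rav_before, rav_after, annual_income, annual_change_cost):
    delta = rav_after - rav_before            # points of surface improvement
    value_per_point = annual_income / 100.0   # assumed valuation rule
    return delta * value_per_point >= annual_change_cost

# e.g. closing two services raises the score from 88.0 to 93.5 on a
# process earning 200,000 a year; a control costing 9,000 a year passes.
worth_it(88.0, 93.5, 200_000, 9_000)
```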
Now trust metrics are almost a different beast. They relate to the OSSTMM in that the factual information you get from verification can be used in the trust rules you generate to make a decision. Trust metrics fill a gap in the OSSTMM by helping you understand what cannot be verified or known, by having you examine what your reasons to trust something are. You apply trust metrics when you need to know how to approach the unknown. In that way they are similar to Risk, but the similarity stops there. They let you compare what you have and know to degrees of what you don't know in an even fashion. By only looking at what reasons you have to trust something new, you avoid false speculation, something human beings are notoriously bad at.
For example, you would use trust metrics to determine whether a new partner network should be connected to your own. Or how much access you would give to visiting consultants. Or whether you can depend on that new cloud provider. You could get rav scores from each network, but that won't help you if they are secure against the world but malicious to you. So you use the trust metrics to determine how much reason you have to trust them and why. The properties you measure them against can be found here. You evaluate reasons to trust against 10 non-fallacious rules, which show you which reasons to trust are strongest and which are weakest. Therefore, hopeless romantics beware: it may cause uncomfortable flashes of reality.
The end effect of trust metrics is that if you did this for each partner, you could create a framework contract that specifically highlights the weak trust areas to create greater assurance. Or you could say no and show them what you need before you say yes. Or you could make the financial rewards more substantial for yourself or the penalties higher for them. Or you can just give them less access with greater controls if you want to be politically correct about the whole thing. With trust metrics you act and protect to an acceptable level of interaction rather than an acceptable level of loss. What you definitely don't need to do is take a chance based on an estimate of the acceptable number of systems which could fall to the malicious attacker. Because that again would be just feeding the dragon.
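A minimal sketch of how such a trust evaluation might be organized follows. The rule names below are placeholders, not the actual ten trust properties defined in OSSTMM 3, and the even, unweighted scoring is an assumption of the sketch.

```python
# Score reasons to trust a new partner against a fixed set of rules,
# each answered from 0.0 (no verified reason) to 1.0 (fully verified).
# Placeholder rule names; consult OSSTMM 3 for the real properties.

RULES = ["size", "symmetry", "transparency", "consistency", "integrity",
         "offsets", "value", "components", "porosity", "redundancy"]

def trust_profile(answers):
    """answers: dict rule -> score in [0, 1]. Returns the overall trust
    percentage and the three weakest rules, which is where a framework
    contract should add assurance."""
    scores = {rule: float(answers.get(rule, 0.0)) for rule in RULES}
    overall = round(100.0 * sum(scores.values()) / len(RULES), 1)
    weakest = sorted(scores, key=scores.get)[:3]
    return overall, weakest
```

The point is the shape of the output: not a probability of loss, but a ranked list of where your reasons to trust are thinnest, which is exactly what you would take into the contract negotiation.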
Hopefully I've explained here clearly why we did what we did with OSSTMM 3. Combining the OSSTMM 3 verification results with ravs and trust metrics lets you build stronger infrastructures by looking at where you are strong against everything you have no reason to trust.
Now, whether or not you agree with what is said here, and some may have fundamental problems with our reasons for taking the OSSTMM 3 in the direction we have, you cannot dispute the value of the information provided by an OSSTMM 3 test. Some of you may be wondering what the Risk would be to give up on Risk and try such a strange, new method. You can only answer that for yourself. Only you know if your Risk method of security will scale indefinitely with you, if the costs of speculation and response products and processes are greater than your actual losses, and if you have enough maidens in your organization to feed all the dragons who show up during the full moons.
Tuesday, August 17, 2010
PCI Feels Like Something is Being Done to Me

Contributed By: PCI Guru
The title of this post is from a quote from a research paper published by Forrester titled ‘PCI Unleashed’. Some of the thoughts discussed by Forrester are so true; I thought I would share a few of them. If you have an opportunity, this is a paper well worth its cost.
One of the key points of the Forrester paper is that PCI was the result of a failure in corporate governance. Forrester correctly points out that had organizations focused on keeping cardholder data properly secured in the first place, the PCI DSS probably never would have existed.
I can confirm that corporate governance is a root cause from my own experiences. We talk a lot to organizations that think PCI is someone else’s problem and that their organization is being singled out.
In a lot of these organizations, security has been given short shrift and has been perpetually on the back burner. In these organizations, senior management sees security, and IT as a whole, as a money pit that does nothing for the organization.
This is because senior management is ignorant of what IT and security bring to the table, because they have hired security and IT leaders who are mice: occasionally seen, but certainly never heard.
The card brands have never been shy about why they generated the PCI standards; it was to protect their brands. After all, when a breach occurs, it is almost always reported by the media as, “X number of Visa/MasterCard/American Express/Discover credit cards were disclosed by ABC Company.” The card brands are always called out first, followed by the company and, if the company is a franchise operation, sometimes the franchisee is named. The problem is that the general public typically only remembers the card brand names, sometimes the company name, and usually never remembers the name of the franchisee.
Want proof? Just look at how badly TJ Maxx, HomeGoods, Marshalls and the like suffered after their parent, TJX Companies, incurred one of the largest breaches in history two years ago – NOT!
In a public relations coup, TJX Companies got the media to use TJX as the name of the breached organization, which protected the brand names of their actual retail outlets. As a result, sales at their retail outlets were only slightly affected by the breach news.

Another point that Forrester brings up is that naysayers point to the fact that PCI compliant companies have been hacked, therefore the PCI standard must not be effective. As I have argued time and again, PCI compliance is not a one-time, annual thing. Compliance requires consistent execution of all of the PCI DSS requirements in order to remain compliant. Consistent execution is a struggle for even the most diligent organizations. It requires constant commitment by employees and management to the importance of doing security right all of the time. The problem is that we are all human and humans are fallible. So lapses will occur in any organization.
This is why all security frameworks are built on the concept of overlapping controls so that should a control go out of compliance, the whole house of cards does not come down. What differentiates a good organization from a bad organization is that a good organization does not have so many lapses at once that entire control structures fail.
If you read the reports from TrustWave and Verizon Business Services on breaches they have investigated, a significant portion of those breaches were the result of systemic failures of the control environment.
Related to the previous point, Forrester also argues that the PCI standards are just a baseline and that organizations must go beyond them to be as secure as they can be. The PCI DSS is just the ante to be in the game. If you want to be certain, you need to go beyond what the PCI DSS requires.
Why? Because new flaws are discovered in software or new techniques are developed that make prior security methods obsolete or no longer as effective. As a result, your security systems must adapt or new security methods need to be developed to detect and circumvent these new threats.
The PCI standards may address these new threats in a future version, but it is your organization’s responsibility to deal with them first. This is also why most security experts say that security is a journey, not a destination.
One point of contention I have with this report is that they state, “Companies that already have robust security policies, processes, and technology do not have difficulty with PCI.” Having worked with a number of organizations that meet these criteria, I can attest that this is not necessarily the case.
A lot of them have very robust security policies, processes and technologies; however, the day-to-day consistent execution was haphazard at best. Management believed that they were in pretty good shape for meeting the PCI standards. As we peeled the onion on their security environment, it became obvious that all management was seeing was a facade of security.
Security people said all of the right things and their policies and procedures said all of the right things, but the actual execution was not even close to consistent. As they like to say, “They talk the talk, but they do not walk the walk.”
As a result, these organizations have struggled to change ingrained cultural issues and “bad habits” that can be much tougher to deal with than implementing new policies or technologies.
Finally, the next time you hear someone say that the PCI standards are not fair or are impossible to comply with, ask them whether they think they can afford a breach. The best tidbit offered by this Forrester report is their estimated cost per account of a breach. Forrester estimates that a breach costs between $90 and $300 per account breached, excluding lawsuits and any remediation efforts.
A modest breach of say 100 accounts carries an estimated cost of $9,000 to $30,000, excluding:
Legal representation – you know that your organization will be sued or threatened with suit over a breach, and that will require your lawyers to go into action. If you think lawyers are cheap, think again, particularly when they are fighting a lot of battles;
Public relations – just as in politics, your organization will have to put the best face on such an incident. If your organization does not, the media will provide that “face” without you and it likely will not be a “good face”;
Investigation – the card brands will require a forensic examination to be performed. If you think lawyers are expensive, the costs related to a forensic examination will make you believe your lawyers are cheap;
Remediation – there will be changes required to better ensure security and some of those changes will likely have a cost associated with them. Only the lucky get away with policy or procedural changes with a minimal cost; and
Loss of sales – your organization will lose customers over this loss of trust, and future sales may also be affected if you did not adequately address the public relations aspect.
There are likely other events that will result from such an incident that will also cost your organization time and money. The bottom line is that this is something your organization should avoid at all costs because, in my experience, most organizations do not survive such an incident.
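Forrester's per-account range above scales linearly, which makes it easy to run for your own account volumes; a trivial sketch:

```python
# Back-of-the-envelope breach cost using Forrester's estimate of
# $90 to $300 per breached account, excluding lawsuits and
# remediation efforts (as the report itself does).

def breach_cost_range(accounts, low=90, high=300):
    return accounts * low, accounts * high

breach_cost_range(100)      # the "modest" 100-account example above
breach_cost_range(100_000)  # a large retail breach dwarfs those figures
```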
Wednesday, August 11, 2010
“I know what happened with the Wikileaks from Brad Manning because I was there. I’m the one who called the U.S. Government”
Contributed by The Cyber Jungle
A press briefing at DefCon, called to announce a non-governmental effort to fight crime and terrorism, took a surprising turn when the group’s director revealed that he was the person who arranged for former hacker Adrian Lamo to turn over leaked classified military documents to the U.S. government.
Chet Uber, director of Project Vigilant, traveled to DefCon18 in Las Vegas to recruit volunteers for the project from the ranks of DefCon attendees, a rich source of talent for his technically intricate mission — attributing crimes to their funders and perpetrators by monitoring internet traffic for “footprints in the digital sand.”
In the course of answering questions from reporters about Vigilant, Uber stopped talking, and then began to discuss his personal involvement with Lamo, whom he described as a friend of his, and who works for Uber as a Vigilant volunteer. Uber said he wanted to “right a wrong,” referring to criticism of Lamo in the hacker community since his meeting with federal authorities.
“I know what happened with the Wikileaks from Brad Manning because I was there, I’m the one who called the U.S. government.”
In the file below, you can hear Uber’s description of how he persuaded Lamo to meet with the feds, turn over the documents, and reveal everything he knows. The audio file is six minutes long, and includes Uber’s account of his interactions with Lamo by phone during the meeting, when Lamo called him for encouragement. Download the full interview here.
Monday, August 9, 2010

PCI DSS and Code Reviews
Contributed By: PCI Guru
Requirement 6.6 of the PCI DSS discusses the concept of code reviews or the implementation of an application firewall to protect Internet facing applications.
For code reviews, requirement 6.6 states:
“Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes”
The confusion regarding code reviews is exacerbated by the fact that most organizations have only read the PCI DSS and not the information supplements that further clarify the PCI DSS requirements.
In April 2008, the PCI SSC issued “Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified.” Pages 2 and 3 go into detail regarding what the PCI SSC deems as appropriate for conducting code reviews.
The first thing that organizations get wrong about meeting 6.6 is conducting their application vulnerability assessment after the application is in production.
Typically, this is done to save time and money as most organizations are already conducting vulnerability scans and penetration testing to meet requirements 11.2 and 11.3. The supplement is very clear that this is not acceptable when it states:
“The reviews or assessments should be incorporated into the SDLC and performed prior to the application’s being deployed into the production environment. The SDLC must incorporate information security throughout, per Requirement 6.3.”
The supplement continues to state:
“… it is recommended that reviews and scans also be performed as early as possible in the development process.”
Further clarification provided during QSA re-certification training indicates that the PCI SSC really believes that the reviews or assessments MUST be incorporated into the SDLC, not merely that they should be.
As a result, the PCI SSC is instructing QSAs to ensure that application vulnerability assessments are done before the application is placed into production and that any critical, high or severe vulnerabilities are addressed prior to the application entering production.
The idea being that applications should go into production free of any known critical, high or severe vulnerabilities.
Code reviews can be done manually or using automated tools. However, if an organization is using one or more automated tools, the code review is not all about the tool.
There must be processes in place that address the vulnerabilities identified and those vulnerabilities that are critical, high or severe must be addressed prior to the application being placed into production. Most organizations conduct this sort of testing as part of their quality assurance process.
Tools such as IBM/Rational AppScan have the ability to integrate into the developer’s workbench and conduct vulnerability testing while the code is developed. However, while that ensures that specific code modules are secure, it does not ensure that all of the modules that make up the application are secure as a whole.
So a vulnerability scan of the completed application should be performed to ensure that the application as a whole is secure.
The next misunderstanding is related to having an “independent organization” conduct the code review. This has been interpreted as code reviews must be conducted by third party application assessors. The PCI SSC did not help this interpretation by their statement in the supplement when they stated:
“While the final sign-off/approval of the review/scan results must be done by an independent organization …”
However, the PCI SSC has indicated in QSA training that independent is defined as anyone not associated with the development of the code being reviewed.
A lot of organizations have a quality assurance group separate from their developers and so the quality assurance group is responsible for conducting the code reviews.
In organizations with very small IT organizations, as long as you have a developer that was not involved in developing the code being reviewed, they can be the independent party that conducts the code review.
Finally, code reviews are only required on code developed by the organization, not PABP or PA-DSS certified purchased software. However, if the purchased software is not PABP or PA-DSS certified, then the software must be assessed under PCI DSS requirements 6.3 through 6.6.
If the software vendor will not cooperate with such an assessment or provide a copy of their own PCI DSS assessment under requirements 6.3 through 6.6, those requirements must be judged as not in place on the organization’s PCI assessment.
Contributed By:
PCI Guru
Requirement 6.6 of the PCI DSS discusses the concept of code reviews or the implementation of an application firewall to protect Internet facing applications.
For code reviews, requirement 6.6 states:
“Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes”
The confusion regarding code reviews is exacerbated by the fact that most organizations have only read the PCI DSS and not the information supplements that further clarify the PCI DSS requirements.
In April 2008, the PCI SSC issued “Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified.” Pages 2 and 3 go into detail regarding what the PCI SSC deems as appropriate for conducting code reviews.
The first thing that organizations get wrong about meeting 6.6 is conducting their application vulnerability assessment after the application is in production.
Typically, this is done to save time and money as most organizations are already conducting vulnerability scans and penetration testing to meet requirements 11.2 and 11.3. The supplement is very clear that this is not acceptable when it states:
“The reviews or assessments should be incorporated into the SDLC and performed prior to the application’s being deployed into the production environment. The SDLC must incorporate information security throughout, per Requirement 6.3.”
The supplement continues to state:
“… it is recommended that reviews and scans also be performed as early as possible in the development process.”
Further clarifications provided during QSA re-certification training indicate that the PCI SSC really believes the reviews or assessments MUST be incorporated into the SDLC, not merely that they should be.
As a result, the PCI SSC is instructing QSAs to ensure that application vulnerability assessments are done before the application is placed into production and that any critical, high or severe vulnerabilities are addressed prior to the application entering production.
The idea being that applications should go into production free of any known critical, high or severe vulnerabilities.
Code reviews can be done manually or using automated tools. However, if an organization is using one or more automated tools, the code review is not all about the tool.
There must be processes in place that address the vulnerabilities identified and those vulnerabilities that are critical, high or severe must be addressed prior to the application being placed into production. Most organizations conduct this sort of testing as part of their quality assurance process.
Tools such as IBM/Rational AppScan have the ability to integrate into the developer’s workbench and conduct vulnerability testing while the code is developed. However, while that ensures that specific code modules are secure, it does not ensure that all of the modules that make up the application are secure as a whole.
So a vulnerability scan of the completed application should be performed to ensure that the application as a whole is secure.
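As a rough sketch of the release gate described above, a build pipeline step might refuse to promote an application while critical, high or severe findings remain open. The findings format and identifiers below are illustrative assumptions, not the output of any particular scanning tool:

```python
# Hypothetical sketch of the "fix before production" gate described
# above. The findings format and severity labels are illustrative
# assumptions, not from any specific scanner.

BLOCKING_SEVERITIES = {"critical", "high", "severe"}

def gate_release(findings):
    """Return the findings that must be remediated before the
    application is allowed into production."""
    return [f for f in findings
            if f["severity"].lower() in BLOCKING_SEVERITIES]

# Example run against illustrative scan output:
scan_results = [
    {"id": "XSS-01", "severity": "High"},
    {"id": "INFO-07", "severity": "Low"},
    {"id": "SQLI-03", "severity": "Critical"},
]

blockers = gate_release(scan_results)
if blockers:
    print("Deployment blocked by %d finding(s)" % len(blockers))
```

The point of the gate is process, not tooling: whatever scanner produces the findings, something must stop the release until the serious ones are closed.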
The next misunderstanding relates to having an “independent organization” conduct the code review. This has been interpreted to mean that code reviews must be conducted by third-party application assessors. The PCI SSC did not help matters with this statement in the supplement:
“While the final sign-off/approval of the review/scan results must be done by an independent organization …”
However, the PCI SSC has indicated in QSA training that independent is defined as anyone not associated with the development of the code being reviewed.
Many organizations have a quality assurance group separate from their developers, in which case the quality assurance group can be responsible for conducting the code reviews.
In very small IT shops, as long as a developer was not involved in writing the code being reviewed, that developer can serve as the independent party conducting the code review.
Finally, code reviews are only required for code developed by the organization, not for purchased software that is PABP or PA-DSS certified. However, if the purchased software is not PABP or PA-DSS certified, then the software must be assessed under PCI DSS requirements 6.3 through 6.6.
If the software vendor will not cooperate with such an assessment or provide a copy of their own PCI DSS assessment under requirements 6.3 through 6.6, those requirements must be judged as not in place on the organization’s PCI assessment.
Tuesday, August 3, 2010
ISO - It's a Bit Emotional

Contributed By:
Javvad Malik
A few days ago my wife gave birth to a baby boy. I remember my Dad telling me that when I was born he had to send a telegram to let his parents know the good(?) news. But times have changed, who needs a telegram when you have a mobile phone?
Like that of all other information security professionals, my life is based around ISO 27001; therefore, in line with section 13.1, ‘reporting information security events & weaknesses’, I noted that I required a formal event reporting process. Luckily I was prepared and had the following process flow documented.
Turn on phone -> Find Dad's number -> Dial -> Wait for Dad to answer -> Give the news
Simple as can be!
However, something strange happened as I dialled the number. My fingers were a little more shaky than usual and my heart was racing. Mouth went dry, so dry in fact that when my Dad answered I could barely manage a croaking sound.
After several attempts and with a quivering voice I managed to spill the news out: “congratulations… … … … it’s a boy.” Well, something like that. It sounded more like “croooak ccrrkk rrkkk boy,” but he got the message in the end.
It’s funny how even the most rational person’s body stops co-operating when things get emotional. Your decision-making ability is impaired and simple things such as walking in a straight line become quite challenging.
Now this got me thinking about how big chiefs in organisations are supposed to act in the event of an incident. Yes, it’s all well and good having a documented process stating the steps you need to take if your entire customer database is compromised.
But in reality if you’re the Chief Security Officer, then you’re under a lot of pressure. No ISO standard or certification will tell you how to deal with the emotional turbulence you’ll undergo so I’ve taken the time to break the process down for you:
1) Shock
This sets in immediately, but for most it doesn’t last long. Unless you’ve got a bad heart condition, in which case this could spell the end. But it’s one of those situations where you think “surely, someone’s going to tell me it’s all a joke or there’s been a mistake”. Then you go on the BBC website, which confirms your blue-chip company has just managed to lose all of its customers’ financial details. You can’t speak a word; it isn’t a joke and it isn’t a simulation.
2) Bitterness
This one gets ugly. Here is where you’ll probably say a lot of things you’ll regret later. Like blaming the executives for not listening to your recommendations, or cursing the finance department for cutting your budgets. You may even resort to blaming your own team for their sheer incompetence. After all you’ve done for them, employing them, training them, the least they could do is do their job properly and not go to security conferences all year round. Expect lots of swearing.
3) Excuse-making
We only got compromised because our carbon footprint is too large and we had some tree-huggers infiltrate our organisation and compromise us from the inside. Or because I wasn’t allowed to go to Defcon this year the security gods were angered. Perhaps it was a covert CIA operation. Conspiracy theories will fly.
4) Despair
After 3 days at your desk despair begins to set in. Everything is horrible and nothing is good. Life has lost all meaning and purpose. This is the stage where most people consider dusting off their 30-year-old CV, try to find out what LinkedIn is all about and eventually decide on moving to Spain to hire out mopeds to British tourists.
5) Acceptance
So you lost some data. Big deal. Every company loses data and ultimately, with a bit of PR, the whole situation can be turned around. You can finally get budget to implement cutting-edge security controls, and once your customers realise this, it will propel the company’s share price to nose-bleeding heights never seen before. This stage will probably last around 30 years, and you’ll be telling that story as you remind people to return the moped with a full tank of petrol.
Tuesday, July 27, 2010

Contributed By:
Dejan Kosutic
Have you ever tried to convince your management to fund the implementation of information security? If you have, you probably know how it feels - they will ask you how much it costs, and if it sounds too expensive they will say no.
Actually, you shouldn't blame them - after all, their ultimate responsibility is the profitability of the company. That means their every decision is based on the balance between investment and benefit, or to put it in management's language - ROI (return on investment).
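To put the ROI language in concrete terms, here is a minimal sketch of the calculation management has in mind. The monetary figures are made up for the example, not drawn from any real project:

```python
# Illustrative only: ROI as management typically computes it, with
# made-up numbers for a hypothetical security investment.

def roi_percent(benefit, cost):
    """ROI = (benefit - cost) / cost, expressed as a percentage."""
    return (benefit - cost) * 100 / cost

# e.g. a 50,000 implementation expected to avert 80,000 in incident
# losses and compliance penalties over the same period:
print(roi_percent(80_000, 50_000))  # → 60.0
```

The hard part, of course, is not the arithmetic but estimating the benefit side credibly, which is exactly what the points below are for.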
This means you have to do your homework before proposing such an investment - think carefully about how to present the benefits, using language the management will understand and endorse.
I'll try to help you - the benefits of information security, especially of implementing ISO 27001, are numerous. But in my experience, the following four are the most important:
1. Compliance
It might seem odd to list this as the first benefit, but it often shows the quickest "return on investment" - if an organization must comply with various regulations regarding data protection, privacy and IT governance (particularly if it is a financial, health or government organization), then ISO 27001 can bring in a methodology that enables it to do so in the most efficient way.
2. Marketing edge
In a market which is more and more competitive, it is sometimes very difficult to find something that will differentiate you in the eyes of your customers. ISO 27001 could indeed be a unique selling point, especially if you handle clients' sensitive information.
3. Lowering the expenses
Information security is usually considered a cost with no obvious financial gain. However, there is financial gain if you lower the expenses caused by incidents. You probably do have interruptions in service, or occasional data leakage, or disgruntled employees. Or disgruntled former employees.
The truth is, there is still no methodology and/or technology to calculate how much money you could save if you prevented such incidents. But it always sounds good if you bring such cases to management's attention.
4. Putting your business in order
This one is probably the most underrated - if you are a company which has been growing sharply for the last few years, you might experience problems like - who has to decide what, who is responsible for certain information assets, who has to authorize access to information systems etc.
ISO 27001 is particularly good at sorting these things out - it will force you to define both responsibilities and duties very precisely, and therefore strengthen your internal organization.
To conclude - ISO 27001 could bring in many benefits besides being just another certificate on your wall. In most cases, if you present those benefits in a clear way, the management will start listening to you.