Technical testing has a long-standing legacy in PCI DSS. Even from the days of the old CISP and SDP standards, technical testing has been a critical component of demonstrating compliance. Over the years, testing has evolved from simple external scans to full penetration testing that requires testers to prod all layers of the OSI model. In PCI DSS 3.0, the Standard requires that you use an industry-accepted penetration-testing methodology, which can lead to confusion. The Council does cite NIST SP 800-115 as an example methodology, and its Penetration Testing Guidance (March 2015) at least gives you a place to start. Penetration testing is not something you should take lightly, nor something you should treat as a slightly more difficult vulnerability scan. True penetration testing almost always leads to a successful intrusion, and it should target people and process as diligently as it targets technology.
PCI DSS 3.1 provided some fairly critical updates to version 3.0 that help to clarify the intent of the new penetration-testing requirements. Specifically, the Council wanted to clarify how segmentation should be tested, what needs to be included in the testing, and how to ensure that blocking and alerting technology works to the best of its ability.
Since technical testing spans a couple of different requirements, we’re going to be jumping around a bit. This section primarily focuses on updates to Requirements 11.3 and 6.6.
Requirement 11.3 evolved in two ways. First, the Council removed some language from the 11.3.2.a testing procedure. Next, they updated Requirement 11.3.4 to help clarify which systems must be tested for segmentation. Remember, as of PCI DSS 3.0, any declared “segmentation” in your environment must be validated as part of this process.
PCI DSS 3.1. DOI: http://dx.doi.org/10.1016/B978-0-12-804627-2.00004-7
© 2016 Elsevier Inc. All rights reserved.
As you are going through your preparation for your annual assessment, consider reviewing the 11.3 language very carefully. Your QSA is going to be asking a few more questions this time around, specifically around your methodology and ensuring the report from your test aligns with that methodology. You will need to take networks and applications through this process, and you will probably have many more findings this year. In addition, while you are required to make sure that you test for any vulnerability listed in Requirement 6.5, be sure your methodology includes a run through the latest OWASP Top 10 as well.
Requirement 6.6 seems to get more teeth with every iteration. When it was first introduced, it was effectively neutered by Requirement 11.3. Or Requirement 6.6 neutered Requirement 11.3. It’s all a matter of perspective. Essentially, companies could choose to (effectively) follow the penetration-testing requirements from 11.3 for their web applications, or they could superfluously add a Web Application Firewall (WAF). WAFs have a place in your information security strategy, but one should not be hastily slapped in place just to satisfy Requirement 6.6. The current iteration in 3.1 is still a bit confusing. Here’s how you need to look at it: you must do one of these two things.
- You must take any public-facing web application through the Requirement 11.3 penetration-testing process, or
- You must deploy a WAF in front of said public-facing web application.
Am I oversimplifying things? Slightly. But the review of both requirements would be nearly identical (albeit with more specific guidance in 6.6).
If you choose to deploy a WAF, PCI DSS 3.1 closes another loophole for deployments that alert instead of block. The update now requires that alerts be “immediately investigated.” The guidance uses weaker language and says that the response must be “timely.” You can guess that there will be interpretation variance on this. Here’s my guidance: if you are using a WAF, tune it and tie it to an alerting system that is immediately followed up on. Or, better yet, set it to actively block bad traffic. If you do either of these things poorly, I can assure you that it will be held over your head in the case of a breach.
You should probably use a combination. I would argue the better strategy is to take it through Requirement 11.3 and ensure you fully meet that requirement. Then, for added measure, drop the WAF in place in alert mode, and develop a risk-based strategy for responding to the alerts. Ultimately, this meets the requirement without the WAF, but putting additional detection technologies in place may help you detect more attacks before they get serious. Check with your security teams to see what kind of technologies they recommend!
Future-proofing your technical testing is painful, but it’s absolutely possible. Essentially, it means altering your testing methodology and techniques to keep up with current hacking trends, including specific attacks against the technology in use at your firm. For example, if you decide to take on a mobile payments offering, you need to adjust your methodology to include the apps, devices, and infrastructure you stand up that is dedicated to this function. Keeping up with general application vulnerabilities as well as vulnerabilities rooted in your platform is a must, and being able to react to those changes as they pop up is critical. If you have not embraced faster-moving IT deployment methods such as DevOps, it’s probably time to consider moving in that direction. Keep in mind, PCI DSS doesn’t keep up with the bad guys, so if you want to stay safe, you need to.
Other Miscellaneous Changes
There are a number of requirements that had minor updates in PCI DSS 3.1 but didn’t really group together under one of the major issues identified earlier. As you review the Summary of Changes from PCI DSS 3.0 to 3.1 document, spend some time looking through the first ten items on page 3. Those are items that are either general changes, changes specific to the various front-matter sections, or, as the first item says, typo fixes. I found a couple while writing the last book and sent them over to the team. These could be things like missing bullets, formatting issues, or other minor changes that don’t alter the meaning or interpretation of the standard itself. The other nine are specific clarifications that are clearly pointed out for you to consider.
Storage and usage of sensitive authentication data are hotly debated when it comes to scoping and performing PCI DSS assessments. The update in 3.1 puts to bed a long-standing quandary about where this requirement comes into play. There seems to have been a gap in the handoff between the first QSAs and the current user base. The requirement now states that the storage of sensitive authentication data (SAD) is not permitted “after authorization.” If your ability to process is temporarily suspended, you are permitted to perform a store-and-forward for any authorizations that are unable to process due to an outage.
Some advice for this requirement… Your goal, as a target for hackers, should be to remove any SAD immediately after it is used and ensure any payment applications comply with PA-DSS. If you need to do a preauthorization and then a full authorization once you have the total amount, preauthorize with all the SAD, and then do the full authorization without the SAD. After usage, securely scrub the stored data to ensure
that it is completely removed from these systems. This is the same guidance in the big book as well as the same guidance you will find on many industry blogs (including mine).
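The scrub-after-use advice can be sketched in Python. This is a minimal illustration, not a prescribed implementation: the `send_auth` gateway call, the track-data layout, and the approval code are all hypothetical, and a garbage-collected runtime cannot guarantee that no stray copies of the buffer ever existed.

```python
def scrub(buffer: bytearray) -> None:
    """Overwrite the buffer in place so the SAD no longer lives in memory."""
    for i in range(len(buffer)):
        buffer[i] = 0

def authorize_and_scrub(pan: str, track_data: bytearray, send_auth) -> str:
    """Use the SAD exactly once, then scrub it before returning."""
    try:
        return send_auth(pan, bytes(track_data))  # hypothetical gateway call
    finally:
        scrub(track_data)  # runs even if the authorization call raises

# Keep SAD in a mutable buffer (never an immutable str) so it can be zeroed.
sad = bytearray(b";4111111111111111=2512101123400001?")
auth_code = authorize_and_scrub("4111111111111111", sad,
                                lambda pan, td: "APPROVED-000123")
print(auth_code, all(b == 0 for b in sad))  # prints: APPROVED-000123 True
```

The `try`/`finally` shape is the important part: the scrub happens whether or not the authorization succeeds, so a failed transaction never leaves SAD sitting in memory.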
The next step you should consider is doing some level of encryption so that a compromised POS controller would not impact the security of this data. There are a number of options available to you today. Those include acquirer-provided solutions, such as TransArmor from First Data, or P2PE-validated solutions, such as Bluefin’s PayConex. Whatever solution you choose, there are a few elements that you should consider in your solution:
- Does the solution use industry-accepted encryption algorithms?
- Does the solution return tokens to you so you are no longer handling PAN data after the auth?
- Will you be EMV-enabled after the solution?
- Will you be able to accept Apple Pay and Samsung Pay after the solution?
At a minimum, you should ensure the first three items on the list are checked off. Given the liability shift related to EMV deployments coming in October 2015, you want to be on the “enabled” side of that equation—especially since one of the larger retail banks (Chase) has finally started issuing chip cards. Many banks have multiple options for different security features.1 Apple Pay and Samsung Pay may not be needed or desired for your particular solution, but there are advantages to putting either or both in your environment. Keep in mind, there are still many other payment methods that we have not discussed that are outside the scope of PCI DSS. For those, you should work with your acquirer or processor to add those mechanisms to your particular processing options.
TESTING PROCEDURE 3.4.E
Keen observers will now see a new testing procedure for Requirement 3.4, based on the note added in PCI DSS 2.0 about the issues with hashing and truncation. The new testing procedure requires a QSA to “examine implemented controls to verify that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.” Even though this note has been present as far back as PCI DSS 2.0, this is the first time we see a testing procedure to validate that an assessed entity is not doing this. This testing procedure may trip you up as you go through your first 3.1 assessment in areas where you have ignored the note in the requirement.
Bad guys can create rainbow tables, or precomputed tables that contain a PAN and hashed value, in very little time with readily available technology. If you use a hash, but then also store a truncated version of the PAN (as an example, first six, last four), there are only so many variations of those middle six digits that will yield a valid PAN. Once you calculate those, building a hash table is just one added step. Now an attacker has essentially reversed your one-way hashing algorithm and has all the original PAN data.
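To make the math concrete, here is a minimal Python sketch of the correlation attack. It assumes an unsalted SHA-256 hash and uses the well-known 4111… test PAN; the attacker reads the first six and last four digits from the truncated value and simply tries every Luhn-valid middle-six combination.

```python
import hashlib

def luhn_valid(pan: str) -> bool:
    """Luhn check: double every second digit from the right, sum, mod 10."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def build_rainbow_table(first6: str, last4: str) -> dict:
    """Precompute hash -> PAN for every Luhn-valid middle-six combination."""
    table = {}
    for middle in range(1_000_000):
        pan = f"{first6}{middle:06d}{last4}"
        if luhn_valid(pan):
            table[hashlib.sha256(pan.encode()).hexdigest()] = pan
    return table

# An attacker holding the truncated value (411111******1111) and a hash of
# the full PAN recovers the original with a single dictionary lookup.
table = build_rainbow_table("411111", "1111")
stolen_hash = hashlib.sha256(b"4111111111111111").hexdigest()
print(table[stolen_hash])  # recovers 4111111111111111
```

Only about one in ten of the million candidates survives the Luhn check, so the whole table builds in seconds on commodity hardware, which is exactly why storing a hash alongside a truncated PAN fails Testing Procedure 3.4.e.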
Hashing, in general, has fallen in and out of favor as a protection mechanism for cards. The reason it was so attractive at first is that it is a one-way function. In fact, I even wrote articles about it in the mid-2000s proclaiming it as a great solution for protecting card data. I still think it has great uses, but there is a catch in how we treat hashed data that is unique to these algorithms. Hashes are one-way algorithms, meaning there is no mathematical way to go from a hash back to the original value. This is unlike crypto routines where Alice takes plaintext, encrypts it using a key, and sends it to Bob to decrypt with his key. In that case, the message is hidden from the transport medium.
If we take the same analogy and apply it to hashing, Alice takes her plaintext and runs it through a hashing mechanism, potentially with an added secret value referred to as a salt, and sends it to Bob. Alice has effectively hidden the message from the transport medium as well as the recipient. Bob cannot convert that hash back into a plaintext value to read. To use an old phrase, you can turn oranges into orange juice, but you can’t turn orange juice back into oranges.
It’s for this reason that we run into problems. Encrypted data is protected by a key, yet you don’t see companies leaving their encrypted data lying around in plain sight for anyone to take. Hashed data is often treated as just another form of encrypted data, but because it cannot be mathematically reversed, many firms have been lax about protecting it.
Now let’s say that Alice will only ever send Bob one of 1000 messages. Alice may want to signal Bob using one of those predetermined messages as a trigger that an event is going to happen. If Bob knows all of those messages and knows the key or salt, Bob can precompute all the possible hashes from the 1000 predetermined messages and match one to the hash from Alice. The transport medium cannot read the message, but Bob will ultimately know what Alice sent. Aside from the obviously nefarious uses for this, PANs can suffer the same fate. The difference between PAN data and the Alice and Bob messages is that we already know and can precompute every possible PAN, regardless of whether it is tied to an open account. So if a bad guy compromises some hashed data that includes truncated data, you have shortcut the work he has to do to obtain an original PAN.
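The Alice-and-Bob scenario above can be sketched in a few lines of Python. The salt value and message format are invented for illustration; the point is that knowing the message space plus the salt defeats the one-way property entirely.

```python
import hashlib

SALT = b"shared-secret-salt"  # hypothetical salt known to both parties

def salted_hash(message: str) -> str:
    """One-way on its own, but not against a known message space."""
    return hashlib.sha256(SALT + message.encode()).hexdigest()

# Bob precomputes the hash of every message Alice might send...
known_messages = [f"signal-{i}" for i in range(1000)]
lookup = {salted_hash(m): m for m in known_messages}

# ...so when a hash arrives, one dictionary lookup reveals the plaintext,
# even though the hash itself cannot be mathematically reversed.
received = salted_hash("signal-742")
print(lookup[received])  # recovers signal-742
```

Swap “1000 predetermined messages” for “every possible PAN” and this is exactly the attack the new testing procedure is meant to catch.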
If you leverage hashing in your environment, it’s time to ensure you are not coupling that with any kind of truncation that could trip you up on this testing procedure.
This clarification is unfortunately necessary, but anyone who argued that SMS didn’t apply to this requirement was really kidding themselves. I could see an argument for SMS as part of that old carrier-grade exemption, but I don’t know how you would work around the GSM and CDMA notes in Requirement 4.1. If it smells like an end-user messaging technology, treat it like one. Think of all the other versions of end-user messaging that are not listed in the examples, such as Kik, Cyber Dust, Wickr, Snapchat (hah), WhatsApp, Facebook Messenger, Slack, and even Twitter DMs. Keep your PANs away from anything like that.
REQUIREMENT 8.1.4 AND 8.2.4
Let’s tackle these two requirements together, as the changes are interrelated. In the 4th edition of PCI Compliance, we discussed how the ambiguity in PCI DSS is getting a bit out of whack. Words like “should” and “periodic” increased dramatically from PCI DSS 2.0 to 3.0. Another ambiguous term could be “at least,” I suppose. PCI DSS 3.1 clarified Requirement 8.1.4 by replacing the phrase “at least” with “within,” and Requirement 8.2.4 by adding the word “once” after the phrase “at least.” My only advice to those of you who feel wronged by this change is to stop trying to outsmart the requirements. Your effort would be better spent trying to reduce your scope through end-to-end encryption or outsourcing than on bickering about a few words that might get you an extra few weeks out of a password change.
This one is such a minor change that I feel it was clearly an issue identified within a few weeks of the original 3.0 publication. There are two changes to this requirement. The first comes in the 9.2.a testing procedure, where the word “new” was removed from the first (actual) bullet. There is a formatting mistake corrected right above it, so we’ll assume that the first bullet in 3.1 is what was intended as the first bullet for 3.0. I can see how the word “new” would trip up folks who read the words literally, but the requirement is identical between 3.0 and 3.1; just the testing procedures changed. In addition, the Council combined testing procedures 9.2.b and 9.2.d to account for a redundancy issue.
TESTING PROCEDURE 9.9.1.B
This is another minor wording change to more clearly state what is being asked in the testing procedure. The intent of the testing procedure was to ensure a QSA was looking at both the location of the device as well as the device itself when considering the accuracy of the list. Fairly simple clarification, but we’re going to spend a few paragraphs here talking about the challenges with Requirement 9.9 and what you need to be doing to address them.
Requirement 9.9 is designed to add a layer of protection on one of the most visible parts of the cardholder data environment—the payment terminal. Over the last several years, these devices have been under increasing scrutiny by both attackers and the payment card industry at large. Terminals are subject to standards that have evolved specifically to add controls to the terminal itself through physical and electronic hardening. Since these devices are the first interaction with a payment card, the right kind of compromise could yield criminals thousands of PANs.
Terminals can be vulnerable to tampering. A determined attacker with access to dummy terminals may find a weakness that allows them to place skimming devices inside the terminal itself. Examples where this has happened in the past include attackers drilling small holes into terminals to expose parts of the electronics, and tiny skimmers placed inside or around the terminal housing. To combat some of the more blatant attempts to tamper with terminals, the PCI Security Standards Council added a new requirement to version 3.0 of PCI DSS—now in effect as of July 2015.
PCI DSS gives merchants the flexibility to use a number of different methods to comply with any of its requirements. At any given time, you should have full records that include the make and model of the device, its specific location, the serial number of the device, and specific details on how the terminal has been inspected for possible tampering or substitution, with a record of all of those inspections and findings. Additional items you may want to record include its projected lifespan, contacts for support, replacement costs, and potential upgrade paths. All of those items together, provided they are consistently kept up to date with a relatively short inspection interval (such as weekly), demonstrate to an auditor or assessor that you are fully meeting this requirement.
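If you track this inventory electronically, the record items listed above map naturally onto a simple data structure. This is a hypothetical sketch (field names, vendor, and model are invented), not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InspectionRecord:
    inspected_on: date
    inspector: str
    findings: str  # e.g. "no tampering found", "seal broken"

@dataclass
class TerminalRecord:
    make: str
    model: str
    serial_number: str
    location: str
    inspections: list[InspectionRecord] = field(default_factory=list)

# One terminal with a weekly inspection logged against it.
terminal = TerminalRecord(
    make="ExampleVendor",   # hypothetical vendor
    model="EV-100",         # hypothetical model
    serial_number="SN-0001",
    location="Store 42, register 1",
    inspections=[
        InspectionRecord(date(2015, 7, 6), "j.smith", "no tampering found"),
    ],
)
print(terminal.serial_number, len(terminal.inspections))
```

Even a spreadsheet with these columns satisfies the record-keeping intent; the structure matters more than the tool.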
For a small merchant, the case for a manual or automated compliance tool comes down to the manager or owner’s ability to handle a mandated routine process. Evidence of missed inspection points or missing records will definitely point to a noncompliance issue—especially in the aftermath of a breach. For a small merchant, this may be enough to determine cause in the breach, which would eventually lead to fines. For managers or owners who have multiple locations and/or larger footprints with multiple terminals (including multiple terminal brands), compliance most likely requires some level of automation or a technology tool. It is possible to do this manually, but the intensity of manual work required to fully comply makes it impractical. Given that manual, repetitive work may lead to apathy in the individuals performing the tasks, large merchants should consider using a tool to help detect any type of tampering or alteration of the device. In addition, many individuals may not be qualified to properly look for tampering in terminals if they have not had recent briefings on current attack trends.
Modern payment terminals come with countermeasures to help detect tampering and can even disable themselves or send alert codes to anyone listening that there is a problem. Criminals are creative, though, so physical inspection is still paramount to detecting skimmers.
Given our challenges with skimming, is Requirement 9.9 enough to keep you safe? Let us not forget that changes to PCI DSS are reactive and slow. The bad guys know exactly what is required to meet the standard, and thanks to the slow adoption of PCI DSS changes, they have plenty of lead time to figure out how to get around the requirements. Even the Council will tell you that compliance with PCI DSS is not enough to prevent a breach and should not be the high-water mark by which you build your security and compliance program. Thus, while compliance with PCI DSS is required, it should come as a by-product of a solid information security program.
If your business is in an area prone to attacks on terminals, or it is one that uses unattended payment terminals such as automated fuel dispensers, Requirement 9.9 is not enough to keep those points of interaction safe. For additional protection, consider the following where it is reasonably possible to do with minimal business impact:
- Ensure that any locks on unattended payment terminals are changed and not commonly keyed.
- Understand the inner workings of each technology and follow their implementation guides to the letter.
- Routinely inspect the devices for tampering (for instance, at every shift change).
- Place cameras around these devices so that you can capture individuals tampering with them, but not in a way that would capture a customer’s PIN.
- Ensure you receive vendor updates to include security patches and alerts to exploits that defeat antitampering controls.
- Reward employees for spotting and alerting management to suspicious behavior.
- Set reasonable technology lifespans for terminals and replace at their end of life.
- Be a maniacal record keeper.
- Fully understand the controls surrounding your terminals, where you need to augment them, and exactly what liabilities may sit at your feet in the case of a breach (as a hint, you won’t be able to blame that third-party reseller or integrator).
Hopefully this extended review of the effects of Requirement 9.9 helps to illustrate the significant amount of work required to meet this requirement.
The changes to Requirement 10.6.1 are pretty minor. If you put versions 3.0 and 3.1 side by side, you will see a few edits, but nothing really substantial. The official wording is that the requirement was altered to “more clearly differentiate intent from Requirement 10.6.2.” Just keep in mind that you need to actually review your security logs, pay attention to the alerts that your systems generate, and allocate proper resources to manage all the events that your information security systems will generate.
The Council struggles with the demands from the community when it comes to defining terms. At times, the Council does not want to add examples because it may cause a revision in the standard (e.g., the SSL issue we’ve been over) or because it may cause a QSA to become hyper-focused on only those examples. In the case of Requirement 11.5, the Council added some examples of what “unauthorized modification(s)” might include. The examples they added are “changes, additions, and deletions.” Or, in reality, all the things that any proper file integrity monitoring solution will check for and alert on.
As a QSA, one of my favorite things to do was to review a merchant’s risk assessment process and final report. I enjoyed it because you could quickly determine how painful the assessment process was going to be based on how well they handled their internal risk analysis. For the most part, merchants were well-intentioned in their risk assessments. What they lacked was consistency, a methodology for determining risk in a repeatable manner, accurate scope, and recency. The requirement says at least annually and after significant changes to the environment. What constitutes a significant change is always up for debate, but risk assessments were usually not performed often enough.
For every decent attempt at a risk assessment, there were people going through the motions for the sole purpose of trying to meet a compliance requirement. Good QSAs caught this and would question the validity of the assessment. Poor ones just looked at the cover page and checked the box. I often think that if merchants put as much effort into risk assessments as they do into trying to get out of meeting PCI requirements, breaches would be dramatically lower.
For PCI DSS 3.1, you now must ensure your risk assessment results in a “formal, documented analysis of risk.” I imagine there are managers shuddering after reading that. Once it’s on paper, your firm may be obligated to do something about the risk—including disclosing it. If you are doing good risk assessments today, not much should change for you. If you’ve been avoiding formal documentation, it’s time to get with your legal and audit teams to figure out what you need to do and how you need to do it. Don’t expect them to define the methodology for you, but they may have certain things in place that you can leverage.
This section included a hodgepodge of other changes in the standard that didn’t really fit into the other chapters we called out and couldn’t stand on their own as a chapter. Remember, you need to review your latest assessment report side by side with the new requirements to see what might need to change. Be sure to note those items and add them as new findings. Then, work to close them so that your first 3.1 validation goes as smoothly as possible.