Tokenization on the Node

Non-formula data security solution provides greater protection against cyber-criminals.

Data security is the ultimate game of cat and mouse: organizations invest countless resources researching, developing and producing top-of-the-line security platforms for today’s problems, but cyber-criminals have tomorrow’s solution. Ultimately, there is no single silver bullet for keeping databases protected, but a technology at the forefront of innovation is tokenization. Recent advances have made it a legitimate alternative to encryption for the enterprise data warehouse.

Better Protection, More Transparency

At a basic level, tokenization is based on randomness. Unlike encryption, which relies on a potentially vulnerable mathematical algorithm, tokenization assigns completely random alias values to the data. Cyber-criminals can recover the original data only by obtaining a token, breaking into the tokenization server and accessing the protected lookup tables. Even if cyber-criminals hack into a system, they will not be able to do anything with the data tokens unless they also have the lookup tables.
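
To make the lookup-table model concrete, here is a minimal sketch in Python, assuming a simple in-memory table; the names and structure are hypothetical and not drawn from any particular product.

import secrets

# The protected lookup table: random token -> original value.
# Illustrative only; a real token server keeps this table in a
# separate, hardened and access-controlled system.
lookup_table = {}

def tokenize(value):
    token = secrets.token_hex(8)   # random alias with no mathematical relation to the value
    lookup_table[token] = value
    return token

def detokenize(token):
    # Only code with access to the lookup table can recover the original.
    return lookup_table[token]

alias = tokenize("4111 1111 1111 1111")
print(alias)              # e.g. 'f3a91c0b5d2e7a44', worthless on its own
print(detokenize(alias))  # the original value, recoverable only via the table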

DEFINITION

Tokenization is a process of protecting sensitive data by replacing it with alias values or tokens that are meaningless to someone who gains unauthorized access to the information.

Tokenization has remarkable flexibility. Since a token, by definition, will look like the original data value, it can travel inside application databases and components without modifications, resulting in greatly increased transparency. This reduces remediation costs to applications, databases and other areas where sensitive data lives because the tokenized information will match the original’s type, length and other properties.
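
As a rough illustration of that transparency property, the hypothetical Python function below produces a token with the same length and digit-only format as a card number, optionally preserving the last four digits; the exact token format is an assumption made for this sketch, since real products define their own formats.

import secrets

def format_preserving_token(card_number, keep_last=4):
    """Return a random token with the same length and digit-only format
    as the input, optionally keeping the trailing digits. Illustrative only."""
    digits = [c for c in card_number if c.isdigit()]
    random_body = [str(secrets.randbelow(10)) for _ in range(len(digits) - keep_last)]
    tail = digits[len(digits) - keep_last:] if keep_last else []
    return "".join(random_body + tail)

original = "4111111111111111"
token = format_preserving_token(original)
print(token)                                                      # another 16-digit string
print(len(token) == len(original), token[-4:] == original[-4:])   # True True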

The tokenization process can be applied to payments, such as payment card industry transactions; healthcare-related sensitive data, including Health Insurance Portability and Accountability Act information; and personally identifiable information (PII) such as email addresses, phone numbers and dates of birth. When factoring in compliance costs, tokenization provides a more cost-effective solution than other security options because it creates better separation between the data and the security system. This separation can take sensitive data out of scope, eliminating the requirement to meet compliance and the costs that come with it. For organizations that must keep the actual data in house, tokenization greatly reduces the extent and use of sensitive information, dramatically lowering compliance costs.

Reasons to Tokenize

  • Increased security. Several layers of random tables are used instead of an algorithm, so there’s no risk of the algorithm being decoded. Even if cyber-criminals obtain a token, it’s worthless unless they also have access to the token lookup table, which is stored in a separate system (see the sketch after this list).
  • More transparency. A token will look like the original data value in type, length and other characteristics. This enables the token to be used by applications, databases and other components without modifications, resulting in increased transparency.
  • Lower software and hardware costs. Increased transparency reduces remediation costs because the tokenized data will match the properties of the original data.
  • Data versatility. More than one type of data can be converted and any number of data categories, including credit card numbers and medical records, can be tokenized without increasing the token footprint.
  • Reduced PCI costs. Under certain conditions, tokenization takes sensitive data out of scope, meaning organizations that use the technology are required to perform payment card industry (PCI) audits on fewer systems.
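
The Python sketch below illustrates the separation described in the first bullet: a hypothetical application database that stores only tokens alongside a token service that holds the lookup table. The class names and the authorization check are invented for illustration and do not describe any specific product.

import secrets

class TokenService:
    """Stands in for a token server hosted in a separate, access-controlled system."""
    def __init__(self):
        self._lookup = {}

    def tokenize(self, value):
        token = secrets.token_hex(8)
        self._lookup[token] = value
        return token

    def detokenize(self, token, caller_is_authorized):
        if not caller_is_authorized:
            raise PermissionError("detokenization requires access to the token server")
        return self._lookup[token]

token_service = TokenService()

# The application database holds tokens in place of the real card numbers.
app_rows = [{"customer": "Ada", "card": token_service.tokenize("4111111111111111")}]

# An attacker who copies app_rows sees only meaningless tokens ...
print(app_rows)
# ... while an authorized process can still recover the original value.
print(token_service.detokenize(app_rows[0]["card"], caller_is_authorized=True))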

Next-Generation ‘Small Footprint’ Tokenization

First-generation tokenization solutions are characterized by their use of large token servers that constantly grow with encrypted data. The next generation of tokenization is defined by a small system footprint that compresses and normalizes the randomized data. This implementation can be fully distributed and does not require any resource-intensive data replication or synchronization. Users benefit from a lower-cost, less-complex, higher-performing data security solution that guarantees no collisions.
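
One way to picture the small-footprint idea is a fixed, pre-generated table of random substitutions that derives tokens from the data rather than storing a mapping for every value, so the table never grows and nothing needs to be replicated. The Python sketch below is an assumption made for illustration, deliberately simplified, and not a description of any vendor’s method.

import secrets

TABLE_ROWS = 16  # fixed size; does not grow with the amount of data tokenized

def build_static_table(rows=TABLE_ROWS):
    """Generate one random permutation of the digits 0-9 per row, once."""
    table = []
    for _ in range(rows):
        digits = list(range(10))
        for i in range(9, 0, -1):          # Fisher-Yates shuffle with a crypto RNG
            j = secrets.randbelow(i + 1)
            digits[i], digits[j] = digits[j], digits[i]
        table.append(digits)
    return table

STATIC_TABLE = build_static_table()

def tokenize_digits(value):
    """Substitute each digit using a position-dependent row of the fixed table.
    Deliberately simplified; real designs chain multiple table passes."""
    return "".join(str(STATIC_TABLE[pos % TABLE_ROWS][int(ch)])
                   for pos, ch in enumerate(value))

print(tokenize_digits("4111111111111111"))  # same input always yields the same token

Because each node can hold its own copy of a small static table, the same tokenization can run in parallel close to the data, which is the property the list below elaborates on.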

Small-footprint tokenization offers advantages over first-generation methods:

  • Latency. Token servers with small footprints enable the distribution of the tokenization process so that operations can be executed in parallel and closer to the data. Therefore, latency is eliminated or greatly reduced, depending on the deployment approach used.
  • Replication. The smaller footprint allows the creation of farms of token servers that are based on inexpensive commodity hardware. This permits any required scaling without the need for complex or expensive replication processes, which eliminates collisions.
  • Versatility. Any number of data categories, including credit card numbers and medical records, can be tokenized without the challenges that previous tokenization methods impose. More data types can also benefit from the transparent properties that tokens offer. Tokenization can apply to all types of enterprises and is an especially good fit for the Teradata Database. When deployed on the node, small-footprint tokenization can be fully distributed across nodes and AMPs on a Teradata system.

Six Tokenization Myths

There are many misconceptions about tokenization. Here are six pervasive myths and the facts to dispel them:

Myth 1: It is a new data security solution.
Tokenization was first introduced in 1300 by the Vatican and reintroduced in 1998 by IBM, so it’s a technology that has been around for a long time. However, it has recently become a subject of widespread interest in the financial services industry, and the PCI Security Standards Council is expected to release guidelines on tokenization this year.

Myth 2: It is only for credit card data.
Tokenization is a viable solution for multiple types of sensitive data, including credit card numbers, Social Security numbers and email addresses. Tokenization limits the exposure of sensitive information, minimizing the chance that a hacker will obtain any useful data.

Myth 3: Encryption isn’t used.
Sensitive data is encrypted when “resting” as a preventive measure in case the data center is hacked. Encryption is also used on the token server to protect the sensitive data.

Myth 4: The need for PCI compliance is eliminated.
For most organizations, tokenization can reduce payment card industry (PCI) scope and/or the need for compliance. For example, a small organization may never need to store sensitive data on its servers because a token can provide sufficient payment information to complete a transaction without the need for the original data. For large retailers and financial institutions, tokenization would narrow the PCI scope, but they would still need to handle the original data, which would necessitate compliance on some systems.

Myth 5: Tokenization is always the best solution.
The innovative technology will change how people think about safeguarding sensitive data. It’s an excellent solution for protecting credit card numbers, Social Security numbers, health records and personal information such as email addresses, passwords and birth dates. However, encryption is a better option when protecting files such as scanned documents and image and sound files.

Myth 6: It creates collisions.
Token collisions, in which two different values end up assigned the same token, have been a major concern. Current tokenization solutions have been tested and proven to eliminate the risk of collisions.
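
To show what a collision is and how a vault-based token server can rule one out, the Python sketch below uses a simple guard that is assumed here for illustration: keep drawing random tokens until one is unused.

import secrets

lookup = {}

def tokenize_without_collisions(value):
    """Draw random tokens until one is unused, so two different values
    can never share the same token. Illustrative sketch only."""
    while True:
        token = secrets.token_hex(8)
        if token not in lookup:        # collision check against the lookup table
            lookup[token] = value
            return token

a = tokenize_without_collisions("4111111111111111")
b = tokenize_without_collisions("5500000000000004")
print(a != b)  # True: each value receives its own unique token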

Data Protection for Today and Tomorrow

With its transparency characteristics and ability to protect sensitive information, tokenization will become a more popular security solution, driven by forthcoming standards from the PCI Security Standards Council and by data protection requirements that go beyond payment data. Standards will ultimately help set best practices and govern tokenization implementations.

While tokenization doesn’t solve every data security conundrum, it will protect data today and for many years to come, since there is no encryption algorithm or key that must be periodically refreshed to keep pace with the growing threat of malicious attacks.

