Reduce, Reuse, Recycle

Organizations can realize new value from existing data.

Companies are increasingly challenged to use their resources efficiently to rationalize and enable new business capabilities, opportunities, initiatives, applications and projects. Typically, these organizations rely on business case development and traditional project analysis techniques to help prioritize where resources will be applied.

Sophisticated business impact models are developed to isolate the incremental effect of an investment, essentially creating a business case analysis. Often, the analysis includes inputs such as the resources and cost required to implement, revenue potential, expense savings, risk mitigation and option costs.

These are supported by traditional measures such as return on investment (ROI), net present value, risk-adjusted ROI, payback period and time to market. While these analytic techniques represent sound measures, applying them to information management presents a glaring problem.
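As a point of reference, here is a minimal sketch of how the ROI, NPV and payback period measures named above are typically computed. The cash flows, discount rate and horizon are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical project figures for illustration only.
upfront_cost = 1_200_000                    # initial investment
monthly_net_benefit = [150_000] * 24        # 24 months of incremental net benefit
annual_discount_rate = 0.10

# Return on investment over the horizon
total_benefit = sum(monthly_net_benefit)
roi = (total_benefit - upfront_cost) / upfront_cost

# Net present value, discounting each month's benefit
monthly_rate = (1 + annual_discount_rate) ** (1 / 12) - 1
npv = -upfront_cost + sum(
    b / (1 + monthly_rate) ** (m + 1) for m, b in enumerate(monthly_net_benefit)
)

# Payback period: first month in which cumulative benefit covers the upfront cost
cumulative, payback_month = 0, None
for month, benefit in enumerate(monthly_net_benefit, start=1):
    cumulative += benefit
    if cumulative >= upfront_cost:
        payback_month = month
        break

print(f"ROI: {roi:.0%}  NPV: ${npv:,.0f}  Payback: {payback_month} months")
```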

All too often, business leaders view information management in the context of isolated information delivery capabilities—or applications—such as analytic and report requirements. When determining whether a project is a good investment, these leaders focus on traditional project management and business impact modeling techniques to isolate incremental impact and force applications to stand on their own.

In isolation, these applications may pass financial scrutiny, but in aggregate they may slowly destroy shareholder value. Likewise, when considered separately, an application may not pass financial scrutiny, but when leveraged against the efforts of other projects, it may be viewed as feasible.

Paradigm shift


To assist organizations as they decide whether to implement new applications, Teradata has employed a new patent-pending analytic technique called Teradata Data Overlap Analysis. This scientific approach gives business and IT credible insight into real data leverage and reuse opportunities across seemingly disparate business initiatives. The acquired knowledge can help organizations create a springboard for new, incremental initiatives.

The way an organization handles its data directly affects the perceived and actual costs of a project. Some organizations are entrenched in the idea that data must be moved to applications rather than moving applications to a common data infrastructure. The result is data redundancy, higher aggregate cost of managing information assets, increased data security exposure, and cumulatively greater data and information complexity. Exacerbating the problem is the fact that many one-off applications, such as divisional performance scorecards, occur under the radar of enterprise scrutiny, resulting in higher costs, greater effort and increased time to market for new applications.

Companies that learn to leverage their existing data can enjoy a dramatically different business case dynamic, with lower total application costs and greater value realized faster. In this significant paradigm shift, not only are new applications costed at the margin rather than at their isolated total cost, but their time to market can be significantly decreased.

Overlap analysis


Teradata Data Overlap Analysis changes how an organization regards and values its information assets. Each application is considered incrementally and on its own merits in terms of its added financial value. However, new applications are analyzed in the context of data leverage and reuse for their cost and effort. This means that seemingly disparate projects are rationalized in the context of data commonality and leverage.

For example, in banking, anti-money-laundering (AML) guidelines mandate a "know your customer" doctrine. This requires that the organization demonstrate knowledge of where its customers live, what forms of identification they used to validate their identities, reconciliation of their demographics across products, validation that they are not on a watch list, understanding of their normal transaction behaviors in order to detect suspicious activity, and so on. Since much of this data is also used for marketing and credit-risk analysis, maintaining it in a common repository makes sense from the business and IT perspectives of cost, consistency and accuracy.

If the AML compliance initiative of a particular bank were held in isolation, the cost of compiling this data would have to be reflected solely in the AML business case in order for the application to stand on its own financial merits.

Now consider that the marketing department at that bank wants a stand-alone application to support event-based marketing (EBM). EBM uses much of the same data required by the AML effort, but because the EBM initiative is developed in isolation and therefore doesn't share data with AML, its business case would also have to carry the cost of compiling and integrating this data.

Figure 1 depicts the hypothetical bank's AML and EBM applications, each with two sub-initiative categories. When taken holistically and sequentially (first AML, then EBM), it is evident that a certain percentage of the data in the AML categories is available to the EBM application.

For instance, if AML in the second column were fully enabled, then 73% of the data required to enable EBM in the fourth row would already be available. Since AML and EBM share data, a percentage of the data compilation and integration cost should be shared by the two initiatives. This can significantly affect the funding model by introducing more equitable cost apportionment for shared data. Without data overlap insight, the data may appear "free" to EBM, over-burdening the AML business case and under-burdening the EBM business case.
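To make the apportionment concrete, the following sketch splits a shared-data cost between the two initiatives. Only the 73% overlap comes from the example above; the cost figures and the even split are hypothetical assumptions chosen for illustration.

```python
# Hypothetical cost figures; only the 73% overlap is taken from the example.
ebm_standalone_data_cost = 800_000   # what EBM would spend compiling its data in isolation
overlap = 0.73                       # share of EBM's required data already enabled by AML

# Split EBM's data need into the portion AML has already compiled and the truly new portion.
overlapping_cost = ebm_standalone_data_cost * overlap
new_data_cost = ebm_standalone_data_cost * (1 - overlap)

# One simple apportionment: charge the overlapping data cost evenly to both initiatives
# instead of letting it look "free" to EBM while AML carries it all.
ebm_burden = new_data_cost + overlapping_cost / 2
aml_relief = overlapping_cost / 2    # cost shifted off the AML business case

print(f"EBM carries ${ebm_burden:,.0f}; AML is relieved of ${aml_relief:,.0f}")
```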

Challenge conventional wisdom


Data leverage is powerful—but because it is not clearly understood, it is rarely considered when developing new applications and forecasting their aggregate development impact. Traditional data movement to applications is often used instead, but it is only a one-way street and results in increasing data complexity and excessive cumulative costs—and anxious shareholders.

Without integrated data, each new application project must be considered and built from the ground up. First, new computing equipment is sourced, and database and network connections are set up. Then the data model is established, and new extract, transform and load processes are created. The data from source systems is loaded and integrated, and users are trained and access rights granted. All of this entails a significant effort that drives up the cost and lengthens the project's time to value. Using data that is not integrated forces the business case for such a project to stand on its own.

When data that is already in the data warehouse is leveraged or extended to support additional applications, the rules used to estimate project costs and build the business case change dramatically. The time, cost and effort associated with many of the setup tasks indicated above do not have to be incurred again, because they can be leveraged from other applications. The data model does not have to be re-established, only extended. The data does not have to be re-sourced, only expanded upon. And the list goes on. (See figure 2.)

“Companies that learn to leverage their existing data can enjoy a dramatically different business case dynamic with lower total application costs and a greater value realized faster.”

In the bank example, the result is a promising business case for a new EBM application. (See figure 3.) While a nine-month payback using non-integrated data may have rendered the project financially unfeasible, the four-month payback made possible with integrated data can make the project worthwhile and may turn it into a new source of revenue for the company.
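The arithmetic behind that shift is simple. In the sketch below, the nine- and four-month figures match the example, but the upfront costs and the monthly benefit are hypothetical values chosen only to show how reusing integrated data shortens the payback period.

```python
# Hypothetical numbers chosen so the payback periods match the example's 9 vs. 4 months.
monthly_benefit = 100_000          # incremental monthly benefit from the EBM application
cost_from_scratch = 900_000        # source hardware, build ETL, model and load all data anew
cost_with_reuse = 400_000          # extend data already integrated in the warehouse

def payback_months(upfront_cost, monthly_benefit):
    """Months until cumulative (undiscounted) benefit covers the upfront cost."""
    months, cumulative = 0, 0
    while cumulative < upfront_cost:
        months += 1
        cumulative += monthly_benefit
    return months

print("Non-integrated data:", payback_months(cost_from_scratch, monthly_benefit), "months")
print("Integrated data:", payback_months(cost_with_reuse, monthly_benefit), "months")
```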

So instead of a project being considered and initiated from scratch, a holistic analysis can be performed with integrated data to determine data leverage opportunities, cost takeout, revenue enhancement, risk mitigation, execution difficulty and so on. The findings ensure that an application's data requirements consider the shareholders' perspective of investment leverage. After all, more and more shareholders demand that a company's assets (including information assets) be managed efficiently and effectively.

Intriguing and feasible projects

Data leverage provides companies a significant opportunity to change the way they approach new application development and project justification. By leveraging efforts and existing data, companies can reduce the amount of time, effort and cost associated with developing their next application—and potentially take on more projects using the same level of resources.

The concept and power of leveraging data can dramatically change the outcome of a business case analysis. A project that was considered intriguing but dismissed as unfeasible because it did not meet required financial goals may now be viewed as feasible. Additionally, applications that can benefit the company can be enabled faster, yielding business benefits sooner.

Teradata Data Overlap Analysis can unlock the power of data leverage by helping companies reduce their application development costs, lower time to market for new applications and enable more business improvement opportunities. The result is a streamlined business that is better aligned with and accountable to shareholder desires to optimize investment.