Robert Eve

Hybrid Solutions for Data Virtualization and Enterprise Data Warehouses

Eight best practices – Part 2

The intersection of data virtualization and enterprise data warehouses represents a corporate best practice: combining the rich data assets of the enterprise data warehouse with the myriad data sources now available outside it.

In Part Two of this two-part series, I focus on improving data warehouse efficiency with four best practices where data virtualization, used alongside the data warehouse, saves time and money.

Part One examined ways that data virtualization improves data warehouse effectiveness.

5. Data Warehouse Hub and Virtual Data Mart Spoke
In a common scenario in today's enterprises, a central data warehouse hub is surrounded by satellite data marts, like spokes around a hub. These marts typically contain a subset of the warehouse data and serve a subset of the users. They are often created because analytic tools require data in a different form than the warehouse provides; in other cases, they exist to work around warehouse controls. Whatever their origin, each additional mart adds cost and potentially compromises data quality.

IT teams use data virtualization to create virtual data marts that reduce or eliminate physical data marts. This approach abstracts warehouse data to meet the requirements of specific consuming tools and user queries, while preserving the quality and controls of the data warehouse.
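Conceptually, a virtual data mart is just a mart-shaped view defined over the warehouse, resolved at query time instead of materialized as a copy. Below is a minimal sketch of that idea in Python, using SQLite as a stand-in for both the warehouse and the virtualization layer; the table, view and column names are hypothetical.

```python
import sqlite3

# SQLite stands in for the enterprise data warehouse; in a real
# deployment the view would be hosted in the data virtualization layer.
dw = sqlite3.connect(":memory:")
dw.executescript("""
    CREATE TABLE positions (account_id TEXT, ticker TEXT,
                            shares REAL, price REAL, as_of DATE);

    -- A "virtual data mart": a reusable view that reshapes warehouse
    -- data for a specific audience. No data is copied; the view is
    -- resolved against the warehouse tables at query time.
    CREATE VIEW portfolio_mart AS
        SELECT account_id,
               ticker,
               shares * price AS market_value,
               as_of
        FROM positions;
""")
dw.execute("INSERT INTO positions VALUES ('A1', 'ACME', 100, 25.50, '2011-06-30')")

# Consumers query the virtual mart exactly as they would a physical one.
print(dw.execute("SELECT * FROM portfolio_mart").fetchall())
```

Because the view, not a copied table, is the unit of delivery, retiring a mart becomes a matter of dropping a view rather than decommissioning a database.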

In the integration pattern shown in Figure 1, the data virtualization middleware hosts virtual data marts that logically abstract and serve specific analytical reporting requirements.

Industry Examples
A mutual fund company uses data virtualization to let its 150-plus financial analysts build portfolio analysis models in MATLAB and similar analysis tools, drawing on a wide range of equity data from a ten-terabyte financial research data warehouse. Prior to introducing data virtualization, the financial analysts frequently created satellite data marts for every new project. Now the IT team offers virtual data marts: a set of robust, reusable views that directly access the financial data warehouse on demand. Analysts spend more time on analysis and less on accessing data, thereby improving their portfolio returns. The IT team has also eliminated unneeded marts and their maintenance and operating costs.

An energy and oil company uses data virtualization to provide oil well platform data from a central Netezza data warehouse to engineers, maintenance managers and business analysts. The data is optimally formatted for a wide range of specialized analysis tools, including Business Objects, Excel, Tibco Spotfire, Matrikon ProcessNet and Microsoft Reporting. The IT team quickly builds virtual views and services, responding rapidly to new, ad hoc queries. Analysts now treat the warehouse, with its virtual data marts, as the single source of truth rather than replicating the data in local, "rogue" data marts.

6. ETL Pre-Processing
To Extract, Transform and Load (ETL) tools, virtual views and data services appear as just another data source that can feed their batch processes. This best practice integrates source types that ETL tools cannot easily access, and it reuses existing views and services, saving time and cost. Further, these abstractions mean ETL developers need not understand the structure of, or interact directly with, the actual data sources, significantly simplifying their work and reducing time to solution.
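The sketch below illustrates the pattern under the same SQLite stand-in assumptions as above: the virtualization layer flattens a complex source model into an ETL-friendly view, and the batch ETL step extracts from that view rather than from the source tables. All names here are hypothetical.

```python
import sqlite3

# One in-memory database plays both roles for brevity; in practice the
# virtualization layer and the warehouse are separate systems.
db = sqlite3.connect(":memory:")
db.executescript("""
    -- A complex source model (think SAP R/3 FICO headers and line items).
    CREATE TABLE doc_header (doc_id TEXT, company TEXT, posted DATE);
    CREATE TABLE doc_line   (doc_id TEXT, account TEXT, amount REAL);

    -- The virtualization layer flattens it into an ETL-friendly view.
    CREATE VIEW fin_postings AS
        SELECT h.company, h.posted, l.account, l.amount
        FROM doc_header h JOIN doc_line l USING (doc_id);

    -- Warehouse target table loaded by the batch ETL step.
    CREATE TABLE dw_fin_postings (company TEXT, posted DATE,
                                  account TEXT, amount REAL);
""")
db.execute("INSERT INTO doc_header VALUES ('D1', 'ACME', '2011-06-30')")
db.execute("INSERT INTO doc_line VALUES ('D1', '4000', 120.0)")

# The ETL step extracts from the view, never from the raw source tables.
db.execute("INSERT INTO dw_fin_postings SELECT * FROM fin_postings")
print(db.execute("SELECT * FROM dw_fin_postings").fetchall())
```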

In the integration pattern shown in Figure 2, the data virtualization middleware complements ETL by providing access, abstraction and federation of packaged applications and web services data sources.

Industry Example
An energy company wanted to include SAP financial data in its enterprise data warehouse along with other sources and content. However, its ETL tools alone were unable to decode the complex SAP R/3 FICO data model. The IT team used data virtualization to access the SAP R/3 FICO data, abstract it into a form more appropriate for the warehouse, and stage it virtually for the ETL tools. With more complete and timely financial data in the warehouse, the company can now better manage its financial performance.

7. Data Warehouse Prototyping
Building data warehouses from scratch is time-consuming and complex, requiring significant design, development and deployment efforts. One of the biggest issues early in a warehouse's lifecycle is frequently changing schemas. This change process requires modification of both the ETL scripts and physical data and typically becomes a bottleneck that slows new warehouse deployments. Nor does this problem go away later in the lifecycle; it merely lessens as the pace of change slows.

In the integration pattern shown in Figures 3 and 4, the data virtualization middleware serves as the prototype development environment for a new data warehouse. In this prototype stage, a virtual data warehouse is built, saving the time required to build a physical warehouse. This virtual warehouse includes a full schema that is easy to iterate as well as a complete functional testing environment.
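In practice, iterating the prototype schema amounts to redefining views, which is far cheaper than altering physical tables and reworking ETL scripts. A minimal sketch, again using SQLite as a stand-in with hypothetical names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
db.execute("INSERT INTO orders VALUES (1, 'EMEA', 50.0)")

# Prototype schema, version 1: a simple fact view.
db.execute("CREATE VIEW fact_sales AS SELECT id, amount FROM orders")

# A schema change request arrives: add region. In a physical warehouse
# this would mean altering tables and reworking ETL scripts; in the
# virtual prototype it is a single view redefinition.
db.execute("DROP VIEW fact_sales")
db.execute("CREATE VIEW fact_sales AS SELECT id, region, amount FROM orders")

print(db.execute("SELECT * FROM fact_sales").fetchall())
```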

Once the actual warehouse is deployed, the views and data services built during the prototype stage retain their value and prove useful for prototyping and testing subsequent warehouse schema changes that arise as business needs or underlying data sources change.

Industry Example
By using data virtualization during the prototyping stage, a government agency sped up its ETL and warehouse development process fourfold. The gain has held as working views are subsequently translated into ETL scripts and physical warehouse schemas.

8. Data Warehouse Migration
Enterprises migrate their data warehouses for a number of reasons. The first is cost: reducing the total cost of ownership. Another is mergers and acquisitions, where duplicate data warehouses must be rationalized. A third is standardization: when an enterprise or government agency runs warehouses on disparate technology platforms, it may migrate them to a single standard platform.

Regardless of the reason, in every case the reporting and analysis supported by the migrating data warehouse must continue to run seamlessly.

Data virtualization removes this reporting risk by inserting a virtual reporting layer between the warehouse and the reporting systems. Decoupling them enables reporting to continue before, during and after the migration. The integration patterns shown in Figure 5, Figure 6, and Figure 7 depict the original state prior to migration; the redirection of reports through the virtual reporting layer; and the final stage, where the new warehouse and supporting ETL are brought online and the old data warehouse is retired. The virtual layer enables a controlled migration of the reporting views: each existing view can be duplicated, repointed at the new warehouse, and tested before the actual cutover, insulating reporting users from undue risk. Further, the virtual reporting layer is easily extended to add more sources or support new reporting solutions.
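The sketch below shows the mechanics of the cutover under the same SQLite stand-in assumptions: old and new warehouses are attached as separate schemas, reports query a single view in the virtual layer, and the cutover repoints that view without touching the report SQL. All names are hypothetical, and a TEMP view is used only because SQLite persistent views cannot reference attached databases; a real virtualization layer has no such restriction.

```python
import sqlite3

# The virtual reporting layer's connection; the old and new warehouses
# are attached as separate schemas (stand-ins for separate platforms).
vlayer = sqlite3.connect(":memory:")
vlayer.execute("ATTACH ':memory:' AS old_dw")
vlayer.execute("ATTACH ':memory:' AS new_dw")
for schema in ("old_dw", "new_dw"):
    vlayer.execute(f"CREATE TABLE {schema}.sales (region TEXT, amount REAL)")
vlayer.execute("INSERT INTO old_dw.sales VALUES ('EMEA', 100.0)")
vlayer.execute("INSERT INTO new_dw.sales VALUES ('EMEA', 100.0)")

# Reports query this view; before migration it points at the old warehouse.
vlayer.execute("CREATE TEMP VIEW sales_report AS SELECT * FROM old_dw.sales")

report_sql = "SELECT region, SUM(amount) FROM sales_report GROUP BY region"
print("before cutover:", vlayer.execute(report_sql).fetchall())

# Cutover: repoint the view at the new warehouse and re-run the test.
# The report SQL never changes, which is what insulates reporting users.
vlayer.execute("DROP VIEW sales_report")
vlayer.execute("CREATE TEMP VIEW sales_report AS SELECT * FROM new_dw.sales")
print("after cutover: ", vlayer.execute(report_sql).fetchall())
```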

Industry Example
To reduce total cost of ownership when migrating to a data warehouse appliance, a large technology company used data virtualization to decouple its reporting from the data warehouse. The company achieved significant data warehousing cost reductions and migrated its reporting without interruption.

Conclusion
The articles in this two-part series have examined the eight best practices for using hybrid solutions of data virtualization and enterprise data warehouses to deliver the most comprehensive information to decision-makers. The intersection of the virtual and the physical data warehouse is aiding forward-thinking enterprises that must deal with the proliferation of data sources, including many web-based and cloud computing sources outside traditional enterprise data warehouses. Enterprises and government agencies that can learn from and adapt these best practices to their own enterprise information architectures will be best prepared to handle the continuous deluge of data found in most enterprise information systems today.

More Stories By Robert Eve

Robert "Bob" Eve is vice president of marketing at Composite Software. Prior to joining Composite, he held executive-level marketing and business development roles at several other enterprise software companies. At Informatica and Mercury Interactive, he helped penetrate new segments in his role as the vice president of Market Development. Bob ran Marketing and Alliances at Kintana (acquired by Mercury Interactive in 2003) where he defined the IT Governance category. As vice president of Alliances at PeopleSoft, Bob was responsible for more than 300 partners and 100 staff members. Bob has an MS in management from MIT and a BS in business administration with honors from University of California, Berkeley. He is a frequent contributor to publications including SYS-CON's SOA World Magazine and Virtualization Journal.
