RAID – redundant array of independent disks

Originally, the term RAID stood for “redundant array of inexpensive disks,” but now it usually refers to a “redundant array of independent disks.” While older storage devices used only one disk drive to store data, RAID storage uses multiple disks in order to provide fault tolerance, to improve overall performance, and to increase storage capacity in a system.

How RAID Works

With RAID technology, data can be mirrored on one or more other disks in the same array, so that if one disk fails, the data is preserved. Thanks to a technique known as “striping,” RAID also offers the option of reading or writing to more than one disk at the same time in order to improve performance. In this arrangement, sequential data is broken into segments which are sent to the various disks in the array, speeding up throughput. Also, because a RAID array uses multiple disks that appear to be a single device, it can often provide more storage capacity than a single disk.
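To make the striping idea concrete, here is a minimal Python sketch (purely illustrative, with byte arrays standing in for physical disks) that splits a byte stream into fixed-size segments, distributes them round-robin across several "disks," and reassembles them on read:

    # Illustrative sketch of RAID 0-style striping (simplified): sequential
    # data is split into fixed-size segments and written round-robin across
    # several "disks" (here, plain byte arrays).

    STRIPE_SIZE = 4  # bytes per segment; real arrays use much larger stripes

    def stripe_write(data: bytes, num_disks: int, stripe_size: int = STRIPE_SIZE):
        disks = [bytearray() for _ in range(num_disks)]
        for i in range(0, len(data), stripe_size):
            segment = data[i:i + stripe_size]
            disks[(i // stripe_size) % num_disks].extend(segment)
        return disks

    def stripe_read(disks, stripe_size: int = STRIPE_SIZE) -> bytes:
        out = bytearray()
        offsets = [0] * len(disks)
        disk = 0
        while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
            out.extend(disks[disk][offsets[disk]:offsets[disk] + stripe_size])
            offsets[disk] += stripe_size
            disk = (disk + 1) % len(disks)
        return bytes(out)

    data = b"The quick brown fox jumps over the lazy dog"
    disks = stripe_write(data, num_disks=3)
    assert stripe_read(disks) == data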

RAID Levels

RAID devices use many different architectures, depending on the desired balance between performance and fault tolerance. These architectures are called “levels.” Standard RAID levels include the following:

Level 0: striped disk array without fault tolerance
Level 1: mirroring and duplexing
Level 2: error-correcting coding
Level 3: bit-interleaved parity
Level 4: dedicated parity drive
Level 5: block interleaved distributed parity
Level 6: independent data disks with double parity
Level 10: a stripe of mirrors

Some devices use more than one level in a hybrid or nested arrangement, and some vendors also offer non-standard proprietary RAID levels.
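The fault tolerance of the parity-based levels (3 through 6) rests on a simple property of XOR: the parity block of a stripe is the XOR of its data blocks, so any single lost block can be rebuilt from the survivors. A conceptual Python sketch, not a controller implementation:

    # Conceptual sketch of the XOR parity used by parity-based RAID levels.
    # The parity block is the XOR of the data blocks in a stripe; if one
    # block is lost, XOR-ing the survivors with the parity recovers it.

    def xor_blocks(blocks):
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]        # data blocks on three disks
    parity = xor_blocks(stripe)                 # stored on a fourth disk

    # Simulate losing the second disk and rebuilding it from the rest.
    survivors = [stripe[0], stripe[2], parity]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == stripe[1]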

RAID History and Alternatives

Before RAID devices became popular, most systems used a single drive to store data. This arrangement is sometimes referred to as a single large expensive disk or SLED. However, SLEDs have some drawbacks. First, they can create I/O bottlenecks because the data cannot be read from the disk quickly enough to keep up with the other components in a system, particularly the processor. Second, if a SLED fails, all the data is lost unless it has been recently backed up onto another disk or tape.

In 1987, three University of California, Berkeley, researchers — David Patterson, Garth A. Gibson, and Randy Katz — first defined the term RAID in a paper titled A Case for Redundant Arrays of Inexpensive Disks (RAID). They theorized that spreading data across multiple drives could improve system performance, lower costs and reduce power consumption while avoiding the potential reliability problems inherent in using inexpensive, and less reliable, disks. The paper also described the five original RAID levels.

Today, RAID technology is nearly ubiquitous among enterprise storage devices and is also found in many high-capacity consumer storage devices. However, some non-RAID storage options do exist. One alternative is JBOD, short for “Just a Bunch of Drives.” JBOD architecture utilizes multiple disks, but each disk in the device is addressed separately. JBOD provides increased storage capacity versus a single disk, but doesn’t offer the same fault tolerance and performance benefits as RAID devices.

Another RAID alternative is concatenation or spanning. This is the practice of combining multiple disk drives so that they appear to be a single drive. Spanning increases the storage capacity of a drive; however, as with JBOD, spanning does not provide reliability or speed benefits.

RAID should not be confused with data backup. Although some RAID levels do provide redundancy, experts advise utilizing a separate storage system for backup and disaster recovery purposes.

Implementing RAID

In order to set up a RAID array, you’ll need a group of disk drives and either a software or a hardware controller. Software RAID runs directly on a server, utilizing server resources. As a result, it may cause some applications to run more slowly. Most server operating systems include some built-in RAID management capabilities.

You can also set up your own RAID array by adding a RAID controller to a server or a desktop PC. The RAID controller runs essentially the same software, but it uses its own processor instead of the system’s CPU. Some less expensive “fake RAID” controllers provide RAID management software but don’t have a separate processor.

Alternatively, you can purchase a pre-built RAID array from a storage vendor. These appliances generally include two RAID controllers and a group of disks in their own housing.

Using a RAID array is usually no different than using any other kind of primary storage. The RAID management will be handled by the hardware or software controller and is generally invisible to the end user.

RAID Technology and Standards

The Storage Networking Industry Association has established the Common RAID Disk Data Format (DDF) specification. In an effort to promote interoperability among different RAID vendors, it defines how data should be distributed across the disks in a RAID device.

Another industry group called the RAID Advisory Board worked during the 1990s to promote RAID technology, but the group is no longer active.


{{ source }}

big data analytics

Big data analytics refers to the process of collecting, organizing and analyzing large sets of data ("big data") to discover patterns and other useful information. Big data analytics helps organizations better understand the information contained within the data and identify the data that is most important to the business and to future business decisions. Big data analysts, in short, want the knowledge that comes from analyzing the data.

The Challenges of Big Data Analytics

For most organizations, big data analysis is a challenge. Consider the sheer volume of data, the many different formats (both structured and unstructured) collected across the entire organization, and the many different ways those types of data can be combined, contrasted and analyzed to find patterns and other useful information.

The first challenge is breaking down data silos to access all the data an organization stores in different places and often in different systems. A second challenge is building platforms that can pull in unstructured data as easily as structured data. This volume of data is typically so large that it is difficult to process using traditional database and software methods.

Big Data Requires High-Performance Analytics

To analyze such a large volume of data, big data analytics is typically performed using specialized software tools and applications for predictive analytics, data mining, text mining, forecasting and data optimization. Collectively these processes are separate but highly integrated functions of high-performance analytics. Using big data tools and software enables an organization to process extremely large volumes of data that a business has collected to determine which data is relevant and can be analyzed to drive better business decisions in the future.

Examples of How Big Data Analytics is Used Today

As technology to break down data silos and analyze data improves, business can be transformed in all sorts of ways. According to Datamation, today’s advances in analyzing Big Data allow researchers to decode human DNA in minutes, predict where terrorists plan to attack, determine which gene is most likely to be responsible for certain diseases and, of course, which ads you are most likely to respond to on Facebook. The business cases for leveraging Big Data are compelling. For instance, Netflix mined its subscriber data to put the essential ingredients together for its recent hit House of Cards, and subscriber data also prompted the company to bring Arrested Development back from the dead.

Another example comes from one of the biggest mobile carriers in the world. France’s Orange launched its Data for Development project by releasing subscriber data for customers in the Ivory Coast. The 2.5 billion records, which were made anonymous, included details on calls and text messages exchanged between 5 million users. Researchers accessed the data and sent Orange proposals for how the data could serve as the foundation for development projects to improve public health and safety. Proposed projects included one that showed how to improve public safety by tracking cell phone data to map where people went after emergencies; another showed how to use cellular data for disease containment. (source)

The Benefits of Big Data Analytics

Enterprises are increasingly looking to find actionable insights into their data. Many big data projects originate from the need to answer specific business questions. With the right big data analytics platforms in place, an enterprise can boost sales, increase efficiency, and improve operations, customer service and risk management.

Webopedia parent company QuinStreet surveyed 540 enterprise decision-makers involved in big data purchases to learn in which business areas companies plan to use Big Data analytics to improve operations. About half of all respondents said they were applying big data analytics to improve customer retention, help with product development and gain a competitive advantage.

Notably, the business area getting the most attention relates to increasing efficiencies and optimizing operations. Specifically, 62 percent of respondents said that they use big data analytics to improve speed and reduce complexity.




{{ source }}

Primary Data: virtualization to simplify storage

Founded by the former Fusion-io team, Primary Data proposes to manage all of a company's storage capacity within a single virtual pool.

David Flynn, CTO and co-founder of Primary Data, is no stranger to readers of Le Monde Informatique: until May 2013 he was chief executive of the start-up Fusion-io, which we met several times in San Jose. And Primary Data's CEO is none other than Lance Smith, Fusion-io's former COO. After their company was acquired by SanDisk in June 2014 for 1.1 billion dollars, the team quickly bounced back by launching Primary Data in August 2013, bringing along their star scientific adviser, Steve Wozniak. With 80 employees, the Los Altos-based start-up has raised 60 million dollars and already claims 10,000 users, even though its solution has only been available for a few months.

While server virtualization has made computing more efficient and network virtualization is starting to do the same for communications, storage remains, in many cases, tied to specific hardware platforms. Well-established vendors such as EMC are admittedly starting to offer tools like ViPR to support different systems, but these are primarily about easing migrations toward their own solutions. Primary Data takes a very different approach, separating data control from the storage medium through a virtualization layer that includes an extension to the cloud. Once the discovery phase is complete and the agents (the Data Hypervisor) are installed on arrays, servers and VMs, all storage capacity, from the cloud to flash arrays, becomes part of a global pool that can then be allocated to meet high-performance or high-capacity needs, Lance Smith told us.

A single protocol, NFS, to federate block, object and file storage

This virtual space can span block, object and file systems while preserving the specific transport protocols, such as Fibre Channel, but all data is handled as files. Instead of introducing a new protocol, Primary Data's software relies on the widely used NFS (Network File System). The Data Hypervisor, backed by a Data Director that manages the metadata of all stored files, answers every request through a cache space (see illustration). Primary Data is currently delivered as a physical appliance to speed up file location and transfer, but a cloud version, in other words a VM, will be available mid-year in response to several customer requests.


A hypervisor on each machine manages the files within a global pool.

“With data virtualization, every user can find a project using the same universal file name, instead of having to worry about where it is stored,” the executive pointed out. “So, once a policy has been set, files can be moved automatically from one time zone to another. When the workday ends in Hollywood, data can be moved to faster local storage in Singapore, where another team has just started its day,” the CEO said. Conversely, data localization is also possible to satisfy specific legal frameworks. Asked about this new venture, Lance Smith simply admitted that the idea for Primary Data had begun to germinate back at Fusion-io.


{{ source }}

With Sanbolic, Citrix steps into virtualized storage

By acquiring Sanbolic, Citrix aims to simplify storage management in virtualized environments, particularly application provisioning.

Citrix Systems has acquired the storage virtualization vendor Sanbolic, which we met in Boston in 2011. The acquisition could make it easier for Citrix users to access applications and virtual desktops hosted in different clouds and datacenters. The distinctive feature of Sanbolic's software is that it makes data available wherever applications need it.

Sanbolic's solutions allow companies to minimize storage-related complexity in virtualized environments for VDI and application provisioning. Whatever the type of infrastructure, Sanbolic's software makes it possible to treat storage as a single virtual system that understands the needs of each application. Adding this capability fits into Citrix's strategy of providing users with an efficient VDI platform and making applications fast and always available.

More than 200 Citrix customers already use Sanbolic

According to the Citrix press release, “the Sanbolic team will be integrated into Citrix immediately.” The two companies did not disclose the terms of the deal. Based in Waltham, Massachusetts, Sanbolic has been in business for 13 years. “More than 200 Citrix customers already use Sanbolic's technology,” Citrix stated. The acquisition of Sanbolic will allow Citrix to reduce infrastructure complexity, an obstacle to VDI (Virtual Desktop Infrastructure) deployments and to its application delivery technology. The company plans to use Sanbolic with its XenDesktop and XenApp products to simplify infrastructure and reduce deployment and management costs.

Storage, which may be spread across dedicated arrays, built into servers and allocated to public and private clouds, is now moving toward the virtualization that has already transformed enterprise computing and networking. Even major storage array vendors such as EMC are beginning to promote global systems (ViPR) rather than specific hardware platforms. A solution that lets each application reach the right data at the moment it needs it could give companies more freedom and allow a flexible, highly efficient deployment of their resources.

Sanbolic's software can work with hard drives, flash cards, NAS and SAN systems, and can manage both server and cloud deployments. “Citrix will be able to develop new products based on this technology,” the vendor explained. According to Citrix, customers will be able to use these products with their existing storage, network and infrastructure equipment.


{{ source }}

ETL – Extract, Transform, Load

ETL is short for extract, transform, load, three database functions that are combined into one tool to pull data out of one database and place it into another database.

ETL is used to migrate data from one database to another, to form data marts and data warehouses and also to convert databases from one format or type to another.


{{ source }}

ETL (Extract-Transform-Load)

ETL comes from Data Warehousing and stands for Extract-Transform-Load. ETL covers the process of loading data from the source system into the data warehouse. These days, ETL often includes a cleaning step as a separate step, making the sequence Extract-Clean-Transform-Load. Let us briefly describe each step of the ETL process.

Process

Extract

The Extract step covers the data extraction from the source system and makes it accessible for further processing. The main objective of the extract step is to retrieve all the required data from the source system with as few resources as possible. The extract step should be designed so that it does not negatively affect the source system in terms of performance, response time or any kind of locking.

There are several ways to perform the extract:

  • Update notification – if the source system is able to provide a notification that a record has been changed and describe the change, this is the easiest way to get the data.
  • Incremental extract – some systems may not be able to provide notification that an update has occurred, but they are able to identify which records have been modified and provide an extract of such records. During further ETL steps, the system needs to identify these changes and propagate them down. Note that when using a daily extract, we may not be able to handle deleted records properly.
  • Full extract – some systems are not able to identify which data has been changed at all, so a full extract is the only way to get the data out of the system. A full extract requires keeping a copy of the last extract in the same format in order to be able to identify changes. A full extract handles deletions as well.

When using incremental or full extracts, the extract frequency is extremely important, particularly for full extracts, where data volumes can reach tens of gigabytes.
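As a rough illustration of the full-extract approach, the following Python sketch compares the current extract with a saved copy of the previous one to derive inserts, updates and deletes; the record layout and the "id" key are assumptions made for the example:

    # Hypothetical sketch of change detection for a full extract: the
    # current extract is compared with a saved copy of the previous one
    # (keyed by an assumed "id" field) to derive inserts, updates and
    # deletes. Incremental extracts skip this diff but, as noted above,
    # may miss deletions.

    import json

    def diff_full_extracts(previous: dict, current: dict):
        inserted = [current[k] for k in current.keys() - previous.keys()]
        deleted  = [previous[k] for k in previous.keys() - current.keys()]
        updated  = [current[k] for k in current.keys() & previous.keys()
                    if current[k] != previous[k]]
        return inserted, updated, deleted

    previous = {1: {"id": 1, "name": "Alice"}, 2: {"id": 2, "name": "Bob"}}
    current  = {1: {"id": 1, "name": "Alicia"}, 3: {"id": 3, "name": "Carol"}}

    ins, upd, dele = diff_full_extracts(previous, current)
    print(json.dumps({"inserted": ins, "updated": upd, "deleted": dele}, indent=2))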

Clean

The cleaning step is one of the most important, as it ensures the quality of the data in the data warehouse. Cleaning should apply basic data unification rules, such as:

  • Making identifiers unique (sex categories Male/Female/Unknown, M/F/null, Man/Woman/Not Available are translated to standard Male/Female/Unknown)
  • Converting null values into a standardized Not Available/Not Provided value
  • Converting phone numbers and ZIP codes to a standardized form
  • Validating address fields and converting them to proper naming, e.g. Street/St/St./Str./Str
  • Validating address fields against each other (State/Country, City/State, City/ZIP code, City/Street).
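A minimal Python sketch of a few such unification rules follows; the field names and category mappings are invented for the example and would be driven by the warehouse's own standards:

    # Illustrative cleaning rules (column names and category maps are
    # assumptions for the example, not a standard): unify sex codes,
    # replace nulls with a standard marker, and normalize phone digits.

    import re

    SEX_MAP = {"m": "Male", "male": "Male", "man": "Male",
               "f": "Female", "female": "Female", "woman": "Female"}

    def clean_record(record: dict) -> dict:
        cleaned = dict(record)
        sex = (record.get("sex") or "").strip().lower()
        cleaned["sex"] = SEX_MAP.get(sex, "Unknown")
        # Standardize missing values.
        for field, value in cleaned.items():
            if value in (None, "", "N/A"):
                cleaned[field] = "Not Available"
        # Keep only digits in phone numbers, e.g. "(555) 123-4567" -> "5551234567".
        if cleaned.get("phone") not in (None, "Not Available"):
            cleaned["phone"] = re.sub(r"\D", "", str(cleaned["phone"]))
        return cleaned

    print(clean_record({"sex": "F", "phone": "(555) 123-4567", "zip": None}))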

Transform

The transform step applies a set of rules to transform the data from the source to the target. This includes converting any measured data to the same dimension (i.e. conformed dimension) using the same units so that they can later be joined. The transformation step also requires joining data from several sources, generating aggregates, generating surrogate keys, sorting, deriving new calculated values, and applying advanced validation rules.

Load

During the load step, it is necessary to ensure that the load is performed correctly and with as few resources as possible. The target of the load process is often a database. In order to make the load process efficient, it is helpful to disable any constraints and indexes before the load and re-enable them only after the load completes. Referential integrity then needs to be maintained by the ETL tool to ensure consistency.
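The sketch below illustrates that advice with SQLite, chosen only to keep the example self-contained; a real warehouse target and its DDL will differ. The index is dropped before a bulk insert and recreated afterwards:

    # Minimal load sketch: drop the index before the bulk insert and
    # rebuild it after the load completes, mirroring the advice above.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, store TEXT, amount REAL)")
    conn.execute("CREATE INDEX idx_sales_store ON sales (store)")

    rows = [(i, f"store_{i % 10}", i * 1.5) for i in range(10_000)]

    conn.execute("DROP INDEX idx_sales_store")               # disable the index for the load
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.execute("CREATE INDEX idx_sales_store ON sales (store)")  # rebuild after the load

    print(conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0])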

Managing ETL Process

The ETL process seems quite straightforward. But as with every application, there is a possibility that the ETL process fails. This can be caused by a missing extract from one of the systems, missing values in one of the reference tables, or simply a connection or power outage. Therefore, it is necessary to design the ETL process with fail-recovery in mind.

Staging

It should be possible to restart at least some of the phases independently from the others. For example, if the transformation step fails, it should not be necessary to restart the Extract step. We can ensure this by implementing proper staging. Staging means that the data is simply dumped to a location (called the staging area) so that it can then be read by the next processing phase. The staging area is also used during the ETL process to store intermediate results of processing. However, the staging area should be accessed by the ETL process only; it should never be made available to anyone else, particularly not to end users, as it may contain incomplete or in-the-middle-of-processing data and is not intended for data presentation.
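A minimal sketch of file-based staging, assuming an arbitrary CSV format and directory layout: the extract phase dumps its result to the staging area, and the transform phase reads from it, so a failed transform can be rerun without repeating the extract.

    # Sketch of a file-based staging area (paths and format are arbitrary
    # for the example). Extract writes to the staging area; transform
    # reads from it independently.

    import csv, os

    STAGING_DIR = "staging"
    os.makedirs(STAGING_DIR, exist_ok=True)

    def stage_extract(rows, name):
        path = os.path.join(STAGING_DIR, f"{name}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
        return path

    def read_staged(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    path = stage_extract([{"id": "1", "name": "Alice"}, {"id": "2", "name": "Bob"}], "customers")
    print(read_staged(path))  # the transform phase starts here, independent of the extract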

ETL Tool Implementation

When you are about to use an ETL tool, there is a fundamental decision to be made: will the company build its own data transformation tool or will it use an existing tool?

Building your own data transformation tool (usually a set of shell scripts) is the preferred approach for a small number of data sources which reside in storage of the same type. The reason is that the effort to implement the necessary transformations is small, thanks to similar data structures and a common system architecture. This approach also saves licensing costs, and there is no need to train staff on a new tool. It is, however, risky from the TCO (total cost of ownership) point of view: if the transformations become more sophisticated over time, or if other systems need to be integrated, the complexity of such an ETL system grows while its manageability drops significantly. The implementation of your own tool also often amounts to re-inventing the wheel.

There are many ready-to-use ETL tools on the market. The main benefit of using off-the-shelf ETL tools is the fact that they are optimized for the ETL process by providing connectors to common data sources like databases, flat files, mainframe systems, xml, etc. They provide a means to implement data transformations easily and consistently across various data sources. This includes filtering, reformatting, sorting, joining, merging, aggregation and other operations ready to use. The tools also support transformation scheduling, version control, monitoring and unified metadata management. Some of the ETL tools are even integrated with BI tools.

Some of the Well Known ETL Tools

The most well known commercial tools are Ab Initio, IBM InfoSphere DataStage, Informatica, Oracle Data Integrator and SAP Data Integrator.

There are several open source ETL tools, among others Apatar, CloverETL, Pentaho and Talend.


{{ source }}

Data Integration Information


In today’s world, volumes of data grow exponentially in all realms, from personal data to enterprise and global data. It is therefore becoming extremely important to be able to understand and organize data sets. Disciplines such as data integration, migration, synchronization and business intelligence make this possible. This Data Integration Info site strives to explain and describe the ideas and concepts of this complex landscape of data management fields.

Data Integration

Data integration involves combining data from several disparate sources, which are stored using various technologies, to provide a unified view of the data. Data integration becomes increasingly important when merging the systems of two companies or consolidating applications within one company to provide a unified view of the company’s data assets.
Read more on data integration >

Data Migration

Data Migration is the process of transferring data from one system to another while changing the storage, database or application. In reference to the ETL (Extract-Transform-Load) process, data migration always requires at least Extract and Load steps.
Read more on data migration >

Data Synchronization

Data Synchronization is the process of establishing consistency among systems, followed by continuous updates to maintain that consistency. The word ‘continuous’ should be stressed here, as data synchronization should not be considered a one-time task.
Read more on data synchronization >

ETL

ETL comes from Data Warehousing and stands for Extract-Transform-Load. ETL covers a process of how the data are loaded from the source system to the data warehouse.
Read more on ETL >

Business Intelligence

Business Intelligence (BI) is a set of tools supporting the transformation of raw data into useful information which can support decision making. Business Intelligence provides reporting functionality, tools for identifying data clusters, support for data mining techniques, business performance management and predictive analysis.
Read more on business intelligence >

Master Data Management

Master Data Management (MDM) represents a set of tools and processes used by an enterprise to consistently manage their non-transactional data.
Read more on Master Data Management >


{{ source }}

Extract, transform, load

In computing, Extract, Transform and Load (ETL) refers to a process in database usage, and especially in data warehousing, that:

  • extracts data from outside source systems;
  • transforms it to fit operational needs; and
  • loads it into the end target, such as an operational data store, data mart or data warehouse.

Usually all three phases execute in parallel. Since data extraction takes time, a transformation process can run while data is still being pulled, working on the data already received and preparing it for loading; and as soon as some data is ready to be loaded into the target, loading kicks off without waiting for the previous phases to complete.

ETL systems commonly integrate data from multiple applications (systems), typically developed and supported by different vendors or hosted on separate computer hardware. The disparate systems containing the original data are frequently managed and operated by different employees. For example, a cost accounting system may combine data from payroll, sales and purchasing.

Extract

The first part of an ETL process involves extracting the data from the source systems. In many cases this is the most challenging aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes.

Most data warehousing projects consolidate data from different source systems. Each separate system may also use a different data organization and/or format. Common data source formats are relational databases, XML and flat files, but sources may also include non-relational database structures such as Information Management System (IMS), other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even data fetched from outside sources through web spidering or screen-scraping. Streaming the extracted data and loading it on the fly into the destination database is another way of performing ETL when no intermediate data storage is required. In general, the goal of the extraction phase is to convert the data into a single format appropriate for transformation processing.

An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or list of values). If the data fails the validation rules, it is rejected entirely or in part. The rejected data is ideally reported back to the source system for further analysis to identify and rectify the incorrect records. In some cases, the extraction process itself may have to relax a validation rule in order to allow the data to flow to the next phase.


Transform

The data transformation stage applies a series of rules or functions to the extracted data from the source to derive the data for loading into the end target. Some data do not require any transformation at all; such data is known as “direct move” or “pass-through” data in technical terms.

An important function of data transformation is the cleansing of data, which aims to pass only proper data to the target. When different systems interact, interfacing and communicating between them can be a challenge because of how each system stores its data. A character set that is available in one system may not be available in another. Such cases must be handled correctly or they will eventually lead to a number of data quality issues.

In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse:

  • Selecting only certain columns to load: (or selecting null columns not to load). For example, if the source data has three columns (also called attributes), roll_no, age, and salary, then the selection may take only roll_no and salary. Similarly, the selection mechanism may ignore all those records where salary is not present (salary = null).
  • Translating coded values: (e.g., if the source system stores 1 for male and 2 for female, but the warehouse stores M for male and F for female)
  • Encoding free-form values: (e.g., mapping “Male” to “M”)
  • Deriving a new calculated value: (e.g., sale_amount = qty * unit_price)
  • Sorting: Order the data based on a list of columns to improve searching
  • Joining data from multiple sources (e.g., lookup, merge) and deduplicating the data
  • Aggregation (for example, rollup — summarizing multiple rows of data — total sales for each store, and for each region, etc.)
  • Generating surrogate-key values
  • Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
  • Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns)
  • Disaggregation of repeating columns into a separate detail table (e.g., moving a series of addresses in one record into single addresses in a set of records in a linked address table)
  • Look up and validate the relevant data from tables or referential files for slowly changing dimensions.
  • Applying any form of simple or complex data validation. If validation fails, it may result in a full, partial or no rejection of the data, and thus none, some or all the data are handed over to the next step, depending on the rule design and exception handling. Many of the above transformations may result in exceptions, for example, when a code translation parses an unknown code in the extracted data.
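To illustrate a few of the transformation types above (column selection, code translation, deriving a calculated value, deduplication and surrogate-key generation), here is a small Python sketch; the field names and code mappings are assumptions for the example:

    # A few of the transformation types above, sketched in Python
    # (field names and code mappings are illustrative assumptions).

    from itertools import count

    SEX_CODES = {"1": "M", "2": "F"}          # translating coded values
    surrogate_key = count(1)                   # generating surrogate-key values
    seen = set()                               # deduplicating on a business key

    def transform(record: dict):
        key = record["roll_no"]
        if key in seen:                        # drop duplicates
            return None
        seen.add(key)
        return {
            "sk": next(surrogate_key),                            # surrogate key
            "roll_no": key,                                       # selected column
            "sex": SEX_CODES.get(record.get("sex"), "U"),         # code translation
            "sale_amount": record["qty"] * record["unit_price"],  # derived value
        }

    rows = [
        {"roll_no": 7, "sex": "1", "qty": 3, "unit_price": 9.5},
        {"roll_no": 7, "sex": "1", "qty": 3, "unit_price": 9.5},  # duplicate
        {"roll_no": 8, "sex": "2", "qty": 1, "unit_price": 20.0},
    ]
    print([r for r in (transform(row) for row in rows) if r is not None])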

Load

The load phase loads the data into the end target that may be a simple de-limited flat file or a data warehouse. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in an historical form at regular intervals—for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records of the last year. This data warehouse overwrites any data older than a year with newer data. However, the entry of data for any one year window is made in a historical manner. The timing and scope to replace or append are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the data warehouse.

As the load phase interacts with a database, the constraints defined in the database schema — as well as in triggers activated upon data load — apply (for example, uniqueness, referential integrity, mandatory fields), which also contribute to the overall data quality performance of the ETL process.

  • For example, a financial institution might have information on a customer in several departments and each department might have that customer’s information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all of these data elements and consolidate them into a uniform presentation, such as for storing in a database or data warehouse.
  • Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and most likely a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use.

Real-life ETL cycle

The typical real-life ETL cycle consists of the following execution steps:

  1. Cycle initiation
  2. Build reference data
  3. Extract (from sources)
  4. Validate
  5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
  6. Stage (load into staging tables, if used)
  7. Audit reports (for example, on compliance with business rules. Also, in case of failure, helps to diagnose/repair)
  8. Publish (to target tables)
  9. Archive
  10. Clean up

Challenges

ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.

The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that must be managed by transform rules specifications. This leads to an amendment of validation rules explicitly and implicitly implemented in the ETL process.

Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.

Design analysts should establish the scalability of an ETL system across the lifetime of its usage. This includes understanding the volumes of data that must be processed within service level agreements. The time available to extract from source systems may change, which may mean the same amount of data may have to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses with tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro batch to integration with message queues or real-time change-data capture for continuous transformation and update.

Performance

ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and lots of memory. The fastest ETL record is currently held by Syncsort,[1] Vertica and HP at 5.4TB in under an hour, which is more than twice as fast as the earlier record held by Microsoft and Unisys.

In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:

  • Direct path extract method or bulk unload whenever possible (instead of querying the database) to reduce the load on the source system while getting a high-speed extract
  • Most of the transformation processing outside of the database
  • Bulk load operations whenever possible.

Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:

  • Partition tables (and indices). Try to keep partitions similar in size (watch for null values that can skew the partitioning).
  • Do all validation in the ETL layer before the load. Disable integrity checking (disable constraint …) in the target database tables during the load.
  • Disable triggers (disable trigger …) in the target database tables during the load. Simulate their effect as a separate step.
  • Generate IDs in the ETL layer (not in the database).
  • Drop the indices (on a table or partition) before the load – and recreate them after the load (SQL: drop index; create index …).
  • Use parallel bulk load when possible — works well when the table is partitioned or there are no indices. Note: attempting to do parallel loads into the same table (partition) usually causes locks — if not on the data rows, then on indices.
  • If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately. You often can do bulk load for inserts, but updates and deletes commonly go through an API (using SQL).

Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (x100) decreases the number of rows to be extracted, then it makes sense to remove duplications as early as possible, in the database, before unloading the data.
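A toy sketch of that trade-off, using SQLite purely for illustration: duplicates can be removed in the database with DISTINCT before unloading, or in the ETL layer after unloading everything. Which is faster depends on how much DISTINCT shrinks the extract.

    # Dedup in the database with DISTINCT versus dedup in the ETL layer
    # with a set; both yield the same result for this toy source table.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE src (customer TEXT)")
    conn.executemany("INSERT INTO src VALUES (?)",
                     [("alice",), ("bob",), ("alice",), ("alice",)])

    # Option 1: remove duplicates inside the database before unloading.
    in_db = [r[0] for r in conn.execute("SELECT DISTINCT customer FROM src")]

    # Option 2: unload everything and deduplicate in the ETL layer.
    seen, in_etl = set(), []
    for (customer,) in conn.execute("SELECT customer FROM src"):
        if customer not in seen:
            seen.add(customer)
            in_etl.append(customer)

    assert sorted(in_db) == sorted(in_etl)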

A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job “B” cannot start while job “A” is not finished. One can usually achieve better performance by visualizing all processes on a graph, and trying to reduce the graph, making maximum use of parallelism and making “chains” of consecutive processing as short as possible. Again, partitioning of big tables and of their indices can really help.

Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases – and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:

  • Sources
  • Central ETL layer
  • Targets

This allows processing to take maximum advantage of parallel processing. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).

Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for main “fact” tables.

Parallel processing

A recent development in ETL software is the implementation of parallel processing. This has enabled a number of methods to improve overall performance of ETL processes when dealing with large volumes of data.

ETL applications implement three main types of parallelism:

  • Data: By splitting a single sequential file into smaller data files to provide parallel access.
  • Pipeline: Allowing the simultaneous running of several components on the same data stream. For example: looking up a value on record 1 at the same time as adding two fields on record 2.
  • Component: The simultaneous running of multiple processes on different data streams in the same job, for example, sorting one input file while removing duplicates on another file.

All three types of parallelism usually operate combined in a single job.
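As a toy illustration of pipeline parallelism (the second type above), the Python sketch below connects extract, transform and load stages with queues so the transform starts working on records while the extract is still producing them; real ETL engines use their own runtimes, and the conversion rule here is invented for the example:

    # Toy pipeline parallelism: extract and transform run in their own
    # threads, connected by queues, while the main thread plays "load".

    import queue, threading

    SENTINEL = object()

    def extract(out_q):
        for i in range(5):                        # pretend these come from a source system
            out_q.put({"id": i, "amount": i * 10})
        out_q.put(SENTINEL)

    def transform(in_q, out_q):
        while (rec := in_q.get()) is not SENTINEL:
            rec["amount_eur"] = rec["amount"] * 0.9   # hypothetical conversion rule
            out_q.put(rec)
        out_q.put(SENTINEL)

    extracted, transformed = queue.Queue(), queue.Queue()
    threading.Thread(target=extract, args=(extracted,)).start()
    threading.Thread(target=transform, args=(extracted, transformed)).start()

    while (rec := transformed.get()) is not SENTINEL:  # the "load" stage
        print("loading", rec)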

An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.

Rerunnability, recoverability

Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with “row_id”, and tag each piece of the process with “run_id”. In case of a failure, having these IDs helps to roll back and rerun the failed piece.

Best practice also calls for checkpoints, which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, and so on.
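A minimal sketch of these ideas, with an invented table layout: rows are tagged with a run_id, and a checkpoint row is recorded when a phase completes, so a failed piece can be identified, rolled back by run_id and rerun.

    # Tag staged rows with row_id/run_id and record checkpoints per phase
    # (the schema is an assumption made for the example).

    import sqlite3, uuid, datetime

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staging (row_id TEXT, run_id TEXT, payload TEXT)")
    conn.execute("CREATE TABLE checkpoints (run_id TEXT, phase TEXT, finished_at TEXT)")

    def run_phase(run_id, phase, rows):
        conn.executemany(
            "INSERT INTO staging VALUES (?, ?, ?)",
            [(str(uuid.uuid4()), run_id, payload) for payload in rows])
        conn.execute("INSERT INTO checkpoints VALUES (?, ?, ?)",
                     (run_id, phase, datetime.datetime.now(datetime.timezone.utc).isoformat()))
        conn.commit()

    run_id = str(uuid.uuid4())
    run_phase(run_id, "extract", ["rec-1", "rec-2"])

    # On failure, rows from the bad run can be rolled back by run_id:
    # conn.execute("DELETE FROM staging WHERE run_id = ?", (run_id,))
    print(conn.execute("SELECT phase FROM checkpoints WHERE run_id = ?", (run_id,)).fetchall())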

Virtual ETL

As of 2010, data virtualization had begun to advance ETL processing. The application of data virtualization to ETL allowed the most common ETL tasks of data migration and application integration to be solved for multiple dispersed data sources. So-called virtual ETL operates with an abstracted representation of the objects or entities gathered from a variety of relational, semi-structured and unstructured data sources. ETL tools can leverage object-oriented modeling and work with entities’ representations persistently stored in a centrally located hub-and-spoke architecture. Such a collection, containing representations of the entities or objects gathered from the data sources for ETL processing, is called a metadata repository, and it can reside in memory[2] or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and data profiling consistently and in near-real time.

Dealing with keys

Keys are some of the most important objects in all relational databases, as they tie everything together. A primary key is a column that identifies a given entity, whereas a foreign key is a column in another table that refers to a primary key. Keys can also be made of several columns, in which case they are composite keys. In many cases the primary key is an auto-generated integer that has no meaning for the business entity being represented and exists solely for the purposes of the relational database — commonly referred to as a surrogate key.

As there is usually more than one data source being loaded into the warehouse, the keys are an important concern to be addressed. Your customers might be represented in several data sources: in one, their SSN (Social Security Number) might be the primary key; in another, their phone number; and in a third, a surrogate key. All of the customer information needs to be consolidated into one dimension table.

A recommended way to deal with the concern is to add a warehouse surrogate key, which is used as a foreign key from the fact table.[3]

Usually updates occur to a dimension’s source data, which obviously must be reflected in the data warehouse.
If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports.

That is done by creating a lookup table that contains the warehouse surrogate key and the originating key.[4] This way the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved.

The lookup table is used in different ways depending on the nature of the source data. There are 5 types to consider,[5] three of which are described here:

  • Type 1: The dimension row is simply updated to match the current state of the source system. The warehouse does not capture history. The lookup table is used to identify the dimension row to update or overwrite.
  • Type 2: A new dimension row is added with the new state of the source system. A new surrogate key is assigned. The source key is no longer unique in the lookup table.
  • Fully logged: A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active and to record the time of deactivation.
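The sketch below illustrates the lookup-table idea and the Type 1 / Type 2 behaviours described above, using dictionaries in place of warehouse tables; it is conceptual only, not a full slowly-changing-dimension implementation:

    # Surrogate-key lookup plus Type 1 (overwrite) and Type 2 (new row per
    # change) handling, with dictionaries standing in for warehouse tables.

    from itertools import count

    next_sk = count(1)
    type1_lookup = {}    # (source, source_key) -> surrogate key (one row per entity)
    type2_lookup = {}    # (source, source_key) -> list of surrogate keys (history kept)
    dimension = {}       # surrogate key -> dimension row

    def upsert_type1(source, source_key, attrs):
        """Type 1: overwrite the existing row; no history is kept."""
        sk = type1_lookup.setdefault((source, source_key), next(next_sk))
        dimension[sk] = {"sk": sk, **attrs}
        return sk

    def insert_type2(source, source_key, attrs):
        """Type 2: add a new row with a new surrogate key for each change."""
        sk = next(next_sk)
        type2_lookup.setdefault((source, source_key), []).append(sk)
        dimension[sk] = {"sk": sk, **attrs}
        return sk

    upsert_type1("crm", "SSN-123", {"name": "Alice", "city": "Boston"})
    upsert_type1("crm", "SSN-123", {"name": "Alice", "city": "Denver"})      # overwritten
    insert_type2("billing", "PHONE-555", {"name": "Bob", "plan": "basic"})
    insert_type2("billing", "PHONE-555", {"name": "Bob", "plan": "premium"})  # new version row

    print(len(dimension))  # 3 rows: one Type 1 entity, two Type 2 versions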

Tools

Programmers can set up ETL processes using almost any programming language, but building such processes from scratch can become complex. Increasingly, companies are buying ETL tools to help in the creation of ETL processes.[6]

By using an established ETL framework, one may increase one’s chances of ending up with better connectivity and scalability. A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into Enterprise Application Integration, or even Enterprise Service Bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities. A common use case for ETL tools is converting CSV files to formats readable by relational databases. A typical translation of millions of records is facilitated by ETL tools that enable users to input CSV-like data feeds/files and import them into a database with as little code as possible.

ETL tools are typically used by a broad range of professionals, from computer science students looking to quickly import large data sets to database architects in charge of company account management; they have become a convenient tool that can be relied on for maximum performance. ETL tools in most cases contain a GUI that helps users transform data conveniently, as opposed to writing large programs to parse files and modify data types, which ETL tools facilitate as much as possible.

Commercial Tools

Commercially available ETL tools include:

  • Informatica PowerCenter
  • IBM Datastage
  • Ab Initio
  • Microstrategy
  • Oracle Data Integrator (ODI)
  • Microsoft SQL Server Integration Services (SSIS)
  • Pentaho Data Integration (or Kettle)
  • Talend


References

  1. “New ETL World Record: 5.4 TB Loaded in Under 1 Hour”, Syncsort.
  2. Virtual ETL.
  3. Kimball, The Data Warehouse Lifecycle Toolkit, p. 332.
  4. Golfarelli and Rizzi, Data Warehouse Design, p. 291.
  5. Golfarelli and Rizzi, Data Warehouse Design, p. 291.
  6. “ETL poll produces unexpected results”.

{{ source }}

Moving past the illusion of data democracy

Individuals can no longer be their own privacy enforcers

Choice permeates nearly every facet of American life. We celebrate our freedom to voice a preference with every election and every episode of American Idol. We also want a choice in information privacy: the power to dictate exactly how our personal data is collected and used.

In a recent Foreign Affairs note, Ann Cavoukian, former Information and Privacy Commissioner of Ontario, Canada, put it this way: “When it comes to regulating privacy, let the people decide.” This concept of privacy as choice originated with Alan Westin, in his groundbreaking 1967 book Privacy and Freedom. He defined privacy as “…the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”

The reality, however, is that pervasive data gathering and analytical techniques fueled by advances in communication and information technology are rendering true data democracy obsolete.

The idea of letting the people decide is appealing. We each have different ideas of what should and should not be done with our information, and we all seek some level of protection from prying eyes. With that, policymakers developed a “notice and choice” framework, requiring people to decide at the point of data collection whether they accept the specified uses of their information.

The limitations on this approach are, by now, obvious. In practice, it means providing consumers with incomprehensible legalistic privacy policies, which no one reads, but that are treated as “informed consent” for companies to do as they please. One study estimated that if an average consumer read the privacy policies of all the websites they visited, it would take 224 hours a year.

The problem is only getting worse as the Internet of Things continues its rapid expansion. When refrigerators, automobiles, smartphones, and just about every object in daily life are all equipped with communications capabilities, it will be impossible to execute a privacy framework in which consumers can examine all the possible data uses before information is collected.

In today’s world of pervasive data collection and use, this blind insistence on a data democracy provides only the illusion of individual control. It is a fake mechanism of autonomy, offering no real consumer protection.

There is an alternative. Years ago, former Obama Administration official Danny Weitzner put it this way: “Consumers should not have to agree in advance to complex policies with unpredictable outcomes. Instead, they should be confident that there will be redress if they are harmed by improper use of the information they provide…”

A reform movement is gaining steam. The goal is to prevent consumer harm—to ensure that information is not used in a way that is adverse to an individual’s legitimate interests. The basic concept is to make companies and institutions responsible for how they use collected data. There will clearly be some role for consumers in this regime, but they will not be the only, or even the primary, agent of enforcement.

The Obama Administration’s recent big data report moved in this direction, recommending that policymakers “look closely at the notice and consent framework that has been a central pillar of how privacy practices have been organized….” The accompanying report from the President’s Council of Advisors on Science and Technology is more direct, urging that “policy attention should focus more on the actual uses of big data and less on its collection and analysis.”

The answer is to focus on vigorous enforcement of existing laws that have proven effective at governing the collection and use of personal information. Policymakers and regulators must be vigilant in monitoring business activity to make certain our legal framework offers adequate consumer protection, and they should consider new restrictions on data collection and use only when real consumer harm is proven.

While it may sound paternalistic to have consumers protected instead of actively protecting themselves, the era of privacy notices has passed. Every new innovation in the Internet of Things provides another crack in the illusion of data democracy. It’s time to move beyond this outdated notion — just as it would make no sense for each of us to become our own meat inspector or bank examiner, it no longer makes sense to expect each of us to be our own privacy enforcer.


{{ source }}