Introduction to Big Data Haifeng Li

http://haifengl.wordpress.com




Chapter 1

Introduction

1.1 What’s Big Data?

Gartner, and now much of the industry, uses the “3Vs” model [55] to describe big data:

Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.

There is no doubt that today’s systems process huge amounts of data every day. For example, Facebook’s Hive data warehouse held 300 PB of data with an incoming daily rate of about 600 TB as of April 2014 [78]! This example also shows that big data is fast data: without high-speed data generation and capture, we would not accumulate a large amount of data to process so quickly. According to IBM, 90% of the data in the world today has been created in the last two years alone [48]. High variety (i.e. unstructured data) is another important aspect of big data. It refers to information that either does not have a pre-defined data model or is not organized in a pre-defined format. Traditional data processing systems (e.g. relational data warehouses) may handle large volumes of rigid relational data, but they are not flexible enough to process semi-structured or unstructured data. New technologies have to be developed to handle data from various sources, e.g. text, social networks, images, etc.

The 3Vs model nicely describes several major aspects of big data. Since its introduction, people have added more Vs (e.g. Variability, Veracity) to the list. However, do 3Vs (or 4Vs, 5Vs, …) really capture the core characteristics of big data? Probably not. We are processing data at the scale of petabytes or even exabytes today. But big is always relative, right? Although 1 TB of data is not that big today, it was big and very challenging to process 20 years ago. Recall that the fastest supercomputer in 1994, the Fujitsu Numerical Wind Tunnel, had a peak speed of 170 GFLOPS [76], while an Nvidia K40 GPU in a PC delivers 1430 GFLOPS today [80]. Besides, software innovations (e.g. GFS and MapReduce) have also helped a lot to process bigger and bigger data. With the advances of technology, today’s big data will quickly become small by tomorrow’s standards. The same holds for “high velocity”. So high volume and high velocity are not the core of the big data movement, even though they are the driving force of technology advancement. How about “high variety”? Many people read it as unstructured data, which cannot be handled well by an RDBMS. But unstructured data have always been there, no matter how they are stored, processed, and analyzed. We do handle text, voice, images, and videos better today with the advances in NoSQL, natural language processing, information retrieval, computer vision, and pattern recognition. But that is still about technology advancement rather than the intrinsic value of big data.

We may understand big data better from the business point of view. Although data is a valuable corporate asset, it is just soil, not oil: without analysis, it is pretty much useless. But extremely valuable knowledge and insights can be discovered from data. No matter what you call this analytic process (data science, business intelligence, machine learning, data mining, or information retrieval), the business goal is the same: higher competency gained from the discovered knowledge and insights. But wait a second: hasn’t the idea of data analytics existed for a long time? So what are the real differences between today’s “big data” analytics and traditional data analytics? Looking back at web data analysis, the origin of big data, we find that big data means proactively learning and understanding the customers, their needs, behaviors, experience, and trends in near real-time and 24×7. Traditional data analytics, on the other hand, is passive/reactive, treats customers as a whole or as segments rather than as individuals, and carries a significant time lag. Looking at the applications of big data, many of them are about

User Experience and Behavior Analysis

Personalization

Recommendation

which you rarely find in business intelligence applications. New applications, e.g. the smart grid and the Internet of Things, are pushing this real-time proactive analysis forward to the whole environment and context. Therefore, the fundamental objective of big data is to help organizations turn data into actionable information for identifying new opportunities, recognizing operational issues and problems, and making better decisions. This is the driving force for corporations to embrace big data.

How did this shift happen? The data have been changing. Traditionally, our databases were just systems of record, populated manually by people. In contrast, a large part of big data is log data, generated by applications and recording every interaction between users and systems. Some people call these machine-generated data to emphasize the speed of generation and the size of the data. But the truth is that they are triggered by human actions (event is probably a better name for these data). The Internet of Things will even help us understand the environment and context of user actions. The analysis of events results in a better understanding of every single user and thus yields improved user experience and bigger revenue, a lovely win-win for both customers and businesses.

1.2 Business Use Cases

Big data is not just hype; it can bring great value to business. In what follows, we discuss some use cases of big data in different areas and industries. The list could go on for very long, but we focus on several important cases to show how big data can help solve business challenges.

1.2.1 CRM

Customer relationship management (CRM) is for managing a company’s interactions with current and future customers. By integrating big data into a CRM solution, companies can learn customer behavior, identify sales opportunities, analyze customers’ sentiment, and improve customer experience to increase customer engagement and bring greater profits.

Using big data, organizations can collect more accurate and detailed information to gain a 360-degree view of customers. The analysis of all the customers’ touch points, such as browsing history, social media, email, and call centers, enables companies to gain a much more complete and deeper understanding of customer behavior – what ads attract them, why they buy, how they shop, what they buy together, what they’ll buy next, why they switch, how they recommend a product/service in their social network, etc. Once actionable insights are discovered, companies are more likely to rise above industry standards.

Big data also enables comprehensive benchmarking over time. For example, banks, telephone service companies, Internet service providers, pay TV companies, insurance firms, and alarm monitoring services often use customer attrition analysis and customer attrition rates as one of their key business metrics, because the cost of retaining an existing customer is far less than that of acquiring a new one [68]. Moreover, big data enables service providers to move from reactive churn management to proactive customer retention with predictive modeling, before customers explicitly start to switch.

1.2.2 HCM

Human capital management (HCM) is supposed to maximize employee performance in service of the employer’s strategic objectives. However, current HCM systems are mostly bookkeeping. For example, many HCM software products and services provide [3]

Enrolling or changing benefits information

Reporting life events such as moving or having a baby

Acknowledging company policies

Viewing pay statements and W-2 information

Changing W-4 tax information

Managing a 401(k) account

Viewing the company directory

Submitting requisition requests

Approving leave requests

Managing performance and goals

Viewing team calendars

These are all important HR tasks. However, they are hardly associated with “maximizing employee performance”. Even worse, current HCM systems are passive. Take performance and goals management as an example: an employee and his/her manager enter the goals at the beginning of the year and input the performance evaluations and feedback at the end of the year. So what? If performance was low, it has already been low for most of the year!

With big data, HCM systems can help HR practitioners and managers actively measure, monitor, and improve employee performance. Although it is pretty hard to measure employee performance in real time, especially for long-term projects, studies show a clear correlation between engagement and performance – and most importantly between improving engagement and improving performance [57]. That is, organizations with a highly engaged workforce significantly outperform those without.

Engagement analytics has been an active research area in CRM, and many of its technologies can be borrowed for HCM. For example, churn analysis can be used to understand the underlying patterns of employee turnover. With big data, HCM systems can predict which high-performing employees are likely to leave a company in the next year and then offer possible actions (higher compensation and/or a new job) that might make them stay. Corporations simply want to know their employees as well as they know their customers. From this point of view, it makes a lot of sense to connect HCM and CRM together with big data, shortening the communication paths between the inside and outside worlds.

1.2.3 IoT

The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the Internet infrastructure. IoT represents the next big wave in the evolution of the Internet. The combination of big data and IoT is producing huge opportunities for companies in all industries. Industries such as manufacturing, mobility, and retail have already been leveraging the data generated by billions of devices to provide new levels of operational and business insights.

Industrial companies are making progress in creating financial value by gathering and analyzing vast volumes of machine sensor data [37]. Some are also leveraging insights from machine asset data to create efficiencies in operations and drive market advantages with greater confidence. For example, Thames Water Utilities Limited, the largest provider of water and wastewater services in the UK, is using sensors, analytics, and real-time data to respond more quickly to critical situations such as leaks or adverse weather events [2].

The smart grid, an advanced application of IoT, is profoundly changing the fundamentals of urban areas throughout the world. Multiple cities around the world are conducting so-called smart city trials. For example, the city of Seattle is applying analytics to optimize energy usage by identifying equipment and system inefficiencies and alerting building managers to areas of wasted energy. Elements in each room of a building – such as lighting, temperature, and the position of window shades – can then be adjusted, depending on data readings, to maximize efficiency [1].

1.2.4 Healthcare

Healthcare is a big industry and contributes a significant part of a country’s economy (17.7% of GDP in the USA). Big data can improve our ability to treat illnesses, e.g. by recognizing individuals who are at risk for serious health problems. It can also identify waste in the healthcare system and thus lower the cost of healthcare across the board.

A recent exciting advance in applying big data to healthcare is IBM Watson. IBM Watson is an artificially intelligent computer system capable of answering questions posed in natural language [49]. Watson may work as a clinical decision support system for medical professionals based on its natural language, hypothesis generation, and evidence-based learning capabilities [47, 46]. When a doctor asks Watson about symptoms and other related factors, Watson first parses the input to identify the most important pieces of information; then mines patient data to find facts relevant to the patient’s medical and hereditary history; then examines available data sources to form and test hypotheses; and finally provides a list of individualized, confidence-scored recommendations. The sources of data that Watson uses for analysis can include treatment guidelines, electronic medical record data, notes from doctors and nurses, research materials, clinical studies, journal articles, and patient information.

1.3 Audience

This book is an overview of Big Data technologies, geared toward software architects and advanced developers. Prior experience with Big Data, as either a user or a developer, is not necessary. As this young area is evolving at an amazing speed, we do not intend to cover how to use software tools or their APIs in detail, which would become obsolete very soon. Instead, we focus on how these systems are designed and why. We hope that you gain a better understanding of Big Data and thus make the best use of it.

1.4 Roadmap

Although the book is made for technologists, we start with a brief discussion of data management. Frequently, technologists are lost in the trees of technical details without seeing the whole forest. As we discussed earlier, Big Data is meant to meet business needs in a data-driven approach. To make Big Data a success, executives and managers need all the disciplines to manage data as a valuable resource. Chapter 2 brings up a framework to define a successful data strategy.

Chapter 3 is a deep dive into Apache Hadoop, the de facto Big Data platform. Apache Hadoop is an open-source software framework for distributed storage and distributed processing of big data on clusters of commodity hardware. In particular, we discuss HDFS, MapReduce, Tez, and YARN.

Chapter 4 is a discussion of Apache Spark, the hot new buzzword in Big Data. Although MapReduce is great for large-scale data processing, it is not friendly to iterative algorithms or interactive analytics. Apache Spark is designed to solve this problem by reusing the working dataset.

MapReduce and Spark enable us to crunch numbers in a massively parallel way. However, they provide relatively low-level APIs. To quickly obtain actionable insights from data, we would like to employ a data warehouse built on top of them. In Chapter 5, we cover Pig and Hive, which translate a high-level DSL or SQL into native MapReduce/Tez code. Similarly, Shark and Spark SQL bring SQL on top of Spark. Moreover, we discuss Cloudera Impala and Apache Drill, which are native massively parallel processing query engines for interactive analysis of web-scale datasets.

In Chapter 6, we discuss several operational NoSQL databases that are designed for horizontal scaling and high availability.

Although the book can be read sequentially straight through, you can comfortably break between the chapters. For example, you may jump directly into the NoSQL chapter while skipping Hadoop and Spark.

Chapter 2

Data Management

Big Data exists to solve complex enterprise optimization problems. To make the best use of Big Data, we have to recognize that data is a vital corporate asset: data is the lifeblood of the Internet economy. Today organizations rely on data science to make more informed and more effective decisions, which creates competitive advantages through innovative products and operational efficiencies.

However, data is first of all a liability. The costs of data acquisition, hardware, software, operations, and talent are very high. Without the right management, it is unlikely that we can effectively extract value from data. To make big data a success, we must have all the disciplines to manage data as a valuable resource. Data management is much broader than database management. It is a systematic process of capturing, delivering, operating, protecting, enhancing, and disposing of data cost-effectively, which needs the ongoing reinforcement of plans, policies, programs, and practices.

The ultimate goal of data management is to increase the value proposition of the data. It requires serious and careful consideration and should start with a data strategy that defines a roadmap for meeting business needs in a data-driven approach. To create a data strategy, think carefully about the following questions:

What problem are we trying to solve? What value can big data bring? Big data is hot, and thus many corporations are embracing it. However, big data for the sake of big data is clearly wrong. Others’ use cases do not have to be yours. To glean the value of big data, a deep understanding of your business and the problems to be solved is essential.

Who holds the data, who owns the data, and who can access the data? Data governance is a set of processes that ensures that important data assets are formally managed throughout the enterprise. Through data governance, we expect data stewards and data custodians to exercise positive control over the data. Data custodians are responsible for the safe custody, transport, and storage of the data while data stewards are responsible for the management of data elements – both the content and metadata.

What data do we need? It may seem obvious, but this question is often simply answered with “I do not know” or “Everything”, which indicates a lack of understanding of the business. Whenever this happens, we should go back and answer the first question again. How do we acquire the data? Data may be collected from internal systems of record, log files, surveys, or third parties. The transactional systems may need to be revised to collect the data necessary for analytics.

Where do we store the data, and how long do we keep it? Due to the variety of data, today’s data may be stored in various databases (relational or NoSQL), data warehouses, Hadoop, etc. Today, database management goes way beyond relational database administration. Because big data is also fast data, it is impractical to keep all of the data forever. Careful thought is needed to determine the lifespan of data.

How do we ensure data quality? Garbage in, garbage out. Without ensuring data quality, big data won’t bring any value to the business. With the advent of big data, data quality management is both more important and more challenging than ever.

How do we analyze and visualize the data? A large number of mathematical models are available for analyzing data, but simply applying mathematical models does not necessarily yield actionable insights. Before talking about your mathematical models, go understand your business and problems. Lead the model with your insights (a priori knowledge, in machine learning terms) rather than be led by the uninterpretable numbers of black-box models. Besides, visualization is extremely helpful for exploring data and presenting analytic results, as a picture is worth a thousand words.

How do we manage the complexity? Big data is extremely complicated. To manage the complexity and improve data management practices, we need to develop an accountability framework to encourage desirable behavior, tailored to the organization’s business strategies, strengths, and priorities.

We believe that a good data strategy will emerge after thinking through and answering the above questions.

Chapter 3

Hadoop

Apache Hadoop consists of three core components:

HDFS, a distributed high-throughput file system

MapReduce, a framework for parallel data processing jobs

YARN, for job scheduling and cluster resource management

HDFS splits files into large blocks that are distributed (and replicated) among the nodes in the cluster. For processing the data, MapReduce takes advantage of data locality by shipping code to the nodes that hold the required data and processing the data in parallel.

Originally, Hadoop cluster resource management was part of MapReduce because it was the main computing paradigm. Today the Hadoop ecosystem goes beyond MapReduce and includes many additional parallel computing frameworks, such as Apache Spark, Apache Tez, Apache Storm, etc. So in Hadoop v2 the resource manager, referred to as YARN, was stripped out of MapReduce and improved to support other computing frameworks. Now MapReduce is one kind of application running in a YARN container, and other types of applications can be written generically to run on YARN.

3.1 HDFS

Hadoop Distributed File System (HDFS) [44] is a multi-machine file system that runs on top of machines’ local file system but appears as a single namespace, accessible through hdfs:// URIs. It is designed to reliably store very large files across machines in a large cluster of inexpensive commodity hardware. HDFS closely follows the design of the Google File System (GFS) [38, 59].

3.1.1 Assumptions

An HDFS instance may consist of hundreds or thousands of nodes, which are made of inexpensive commodity components that often fail. This implies that some components are not functional at any given time, and some will not recover from their current failures. Therefore, constant monitoring, error detection, fault tolerance, and automatic recovery have to be an integral part of the file system.

HDFS is tuned to support a modest number (tens of millions) of large files, which are typically gigabytes to terabytes in size. Initially, HDFS assumes a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies the data coherency problem and enables high throughput data access. The append operation was added later (single appender only) [42].

HDFS applications typically need streaming access to large datasets. HDFS is mainly designed for batch processing rather than interactive use; the emphasis is on high throughput of data access rather than low latency.

3.1.2 Architecture

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes that manage storage attached to the nodes that they run on. A typical deployment has a dedicated machine that runs only the NameNode. Each of the other machines in the cluster runs one instance of the DataNode.

HDFS supports a traditional hierarchical file organization that consists of directories and files. In HDFS, each file is stored as a sequence of blocks (each identified by a unique 64-bit id); all blocks in a file except the last one are the same size (typically 64 MB). DataNodes store each block in a separate file on the local file system and provide read/write access. When a DataNode starts up, it scans through its local file system and sends the list of hosted data blocks (called a Blockreport) to the NameNode.

For reliability, each block is replicated on multiple DataNodes (three replicas by default). The placement of replicas is critical to HDFS reliability and performance. HDFS employs a rack-aware replica placement policy to improve data reliability, availability, and network bandwidth utilization. When the replication factor is three, HDFS puts one replica on one node in the local rack, another on a different node in the same rack, and the last on a node in a different rack. This policy reduces the inter-rack write traffic which generally improves write performance. Since the chance of rack failure is far less than that of node failure, this policy does not impact data reliability and availability notably.

The NameNode is the arbitrator and repository for all HDFS metadata. The NameNode executes common namespace operations such as creating, deleting, modifying, and listing files and directories. The NameNode also performs block management, including mapping files to blocks, creating and deleting blocks, and managing replica placement and re-replication. Besides, the NameNode manages DataNode cluster membership by handling registrations and periodic heartbeats. But user data never flows through the NameNode.

To achieve high performance, the NameNode keeps all metadata in main memory, including the file and block namespace, the mapping from files to blocks, and the locations of each block’s replicas. The namespace and file-to-block mapping are also persisted in two files, the EditLog and the FsImage, in the local file system of the NameNode. The FsImage stores the entire file system namespace and file-to-block map. The EditLog is a transaction log that records every change to file system metadata, e.g. creating a new file or changing the replication factor of a file. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, flushes the new version of the FsImage out to disk, and truncates the EditLog.

Because the NameNode replays the EditLog and updates the FsImage only during start up, the EditLog could get very large over time and the next restart of NameNode takes longer. To avoid this problem, HDFS has a secondary NameNode that updates the FsImage with the EditLog periodically and keeps the EditLog within a limit. Note that the secondary NameNode is not a standby NameNode. It usually runs on a different machine from the primary NameNode since its memory requirements are on the same order as the primary NameNode.

The NameNode does not store block location information persistently. On startup, the NameNode enters a special state called Safemode and receives Blockreport messages from the DataNodes. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state.

3.1.3 Control and Data Flow

HDFS is designed such that clients never read and write file data through the NameNode. Instead, a client asks the NameNode which DataNodes it should contact, using the class ClientProtocol through an RPC connection. Then the client communicates with a DataNode directly to transfer data using the DataTransferProtocol, a streaming protocol chosen for performance reasons. Besides, all communication between the NameNode and the DataNodes, e.g. DataNode registration, heartbeats, and Blockreports, is initiated by the DataNode and responded to by the NameNode.

3.1.3.1 Read

First, the client queries the NameNode with the file name, the start offset of the read range, and the range length. The NameNode returns the locations of the blocks of the specified file within that range. In particular, the DataNode locations for each block are sorted by proximity to the client. The client then sends a request to one of the DataNodes, most likely the closest one.
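In terms of the Java API introduced later in this chapter, this whole exchange is hidden behind FSDataInputStream. The following sketch assumes a running HDFS instance reachable via the default configuration on the classpath; the path /tmp/example.txt is a hypothetical example, not anything prescribed by HDFS:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // open() asks the NameNode for the block locations of the file;
        // the returned stream then reads directly from the nearest DataNodes.
        try (FSDataInputStream in = fs.open(new Path("/tmp/example.txt"));
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Note that the client code never mentions the NameNode or DataNodes explicitly; the control flow described above happens entirely inside the library.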

3.1.3.2 Write

A client request to create a file does not reach the NameNode immediately. Instead, the client caches the file data in a temporary local file. Once the local file accumulates more than one block’s worth of data, the client contacts the NameNode, which updates the file system namespace and returns the allocated data block location. Then the client flushes the block from the local temporary file to the specified DataNode. When a file is closed, the remaining data of the last block is transferred to the DataNodes.

3.1.4 The Small Files Problem

Big data but small files (significantly smaller than the block size) implies a lot of files, which creates a big problem for the NameNode [79]. Recall that the NameNode holds all the metadata of files and blocks in main memory. Given that each metadata object occupies about 150 bytes, the NameNode can host only about 10 million files, each using one block, in 3 gigabytes of memory. Although more memory can push the upper limit higher, a large heap is a big challenge for the JVM garbage collector. Furthermore, HDFS is not efficient at reading small files because of the overhead of client-NameNode communication, too many disk seeks, and lots of hopping from DataNode to DataNode to retrieve each small file.
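The arithmetic behind this estimate is simple: 10 million single-block files contribute roughly one file object and one block object each, i.e. about 20 million metadata objects. A quick back-of-the-envelope check:

```java
public class NameNodeMemoryEstimate {
    public static void main(String[] args) {
        final long bytesPerObject = 150;   // rough size of one metadata object
        final long files = 10_000_000L;    // 10 million files
        final long blocksPerFile = 1;      // each file occupies a single block
        // Each file contributes one file object plus its block objects.
        long objects = files * (1 + blocksPerFile);
        long bytes = objects * bytesPerObject;
        System.out.println(bytes / 1_000_000_000L + " GB"); // prints "3 GB"
    }
}
```

The same arithmetic shows why small files hurt: halving the average file size roughly doubles the number of metadata objects for the same total data volume.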

In order to reduce the number of files and thus the pressure on the NameNode’s memory, Hadoop Archives (HAR files) were introduced. HAR files, created by the hadoop archive command, are special-format archives that contain metadata and data files. An archive exposes itself as a file system layer: all of the original files are visible and accessible through a har:// URI. It is also easy to use HAR files as an input file system in MapReduce. Note that it is actually slower to read through files in a HAR because of the extra access to metadata.

A SequenceFile, consisting of binary key-value pairs, can also be used to handle the small files problem by using the filename as the key and the file contents as the value. This works very well in practice for MapReduce jobs. Besides, the SequenceFile format supports compression, which reduces disk usage and speeds up data loading in MapReduce. Open-source tools exist to convert tar files to SequenceFiles [74].
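A sketch of this technique, packing every file in a directory into one SequenceFile, might look as follows. The input and output paths are hypothetical, and for brevity there is no error handling:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PackSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/tmp/small-files"); // hypothetical input directory
        Path out = new Path("/tmp/packed.seq");  // hypothetical output file

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus status : fs.listStatus(dir)) {
                byte[] content = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(content);
                }
                // filename as key, file contents as value
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(content));
            }
        }
    }
}
```

The resulting file can then be consumed by a MapReduce job as a single large input instead of millions of tiny ones.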

Key-value stores, e.g. HBase and Accumulo, may also be used to reduce the file count, although they are designed for much more complicated use cases. Compared to SequenceFile, they support random access by key.

3.1.5 HDFS Federation

The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. However, it also introduces problems. The file count problem, due to the limited memory of NameNode, is an example. A more serious problem is that it proved to be a bottleneck for the clients [59]. Even though the clients issue few metadata operations to the NameNode, there may be thousands of clients all talking to the NameNode at the same time. With multiple MapReduce jobs, we might suddenly have thousands of tasks in a large cluster, each trying to open a number of files. Given that the NameNode is capable of doing only a few thousand operations a second, it would take a long time to handle all those requests.

Since Hadoop 2.0, we can have two redundant NameNodes in the same cluster in an active/passive configuration with a hot standby. Although this allows fast failover to a new NameNode for fault tolerance, it does not solve the performance issue. To partially resolve the scalability problem, the concept of HDFS Federation was introduced to allow multiple namespaces within an HDFS cluster. In the future, it may also support cooperation across clusters.

In HDFS Federation, there are multiple independent NameNodes (and thus multiple namespaces). The NameNodes do not require coordination with each other. The DataNodes are used as common storage by all the NameNodes, registering with and handling commands from all the NameNodes in the cluster. The failure of one NameNode does not prevent a DataNode from serving the other NameNodes in the cluster.

Because multiple NameNodes run independently, the 64-bit block ids generated by different NameNodes may conflict. To avoid this problem, a namespace uses one or more block pools, each identified by a unique id within the cluster. A block pool belongs to a single namespace and does not cross namespace boundaries. The extended block id, a tuple of (Block Pool ID, Block ID), is used for block identification in HDFS Federation.

3.1.6 Java API

HDFS is implemented in Java and provides a native Java API. To access HDFS from other programming languages, Thrift bindings are provided for Perl, Python, Ruby, and PHP [12]. In what follows, we discuss the HDFS Java API with a couple of small examples. First of all, we need to add the following dependencies to the project’s Maven POM file [15].

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.6.0</version>
</dependency>

The main entry point of HDFS Java API is the abstract class FileSystem in the package org.apache.hadoop.fs that serves as a generic file system representation. FileSystem has various implementations:

DistributedFileSystem: The implementation of distributed file systems. This object is the way end-user code interacts with an HDFS.

LocalFileSystem: The local implementation for small Hadoop instances and for testing.

FTPFileSystem: A FileSystem backed by an FTP client.

S3FileSystem: A block-based FileSystem backed by Amazon S3.

The FileSystem class also serves as a factory for concrete implementations:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);

where the Configuration class passes the Hadoop configuration information such as scheme, authority, NameNode host and port, etc. Unless explicitly turned off, Hadoop by default specifies two resources, loaded in order from the classpath:

core-default.xml: Read-only defaults for Hadoop.

core-site.xml: Site-specific configuration for a given Hadoop installation.

Applications may add additional resources, which are loaded subsequent to these resources in the order they are added. With FileSystem, one can do common namespace operations, e.g. creating, deleting, and renaming files. We can also query the status of a file such as the length, block size, block locations, permission, etc. To read or write files, we need to use the classes FSDataInputStream and FSDataOutputStream. In the following example, we develop two simple functions to copy a local file into/from HDFS. For simplicity, we do not check the file existence or any I/O errors. Note that FileSystem does provide several utility functions for copying files between local and distributed file systems.

/** Copy a local file to HDFS. */
public void copyFromLocal(String src, String dst) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // The Path object represents a file or directory in HDFS.
    FSDataOutputStream out = fs.create(new Path(dst));
    InputStream in = new BufferedInputStream(new FileInputStream(new File(src)));

    byte[] b = new byte[1024];
    int numBytes = 0;
    while ((numBytes = in.read(b)) > 0) {
        out.write(b, 0, numBytes);
    }

    in.close();
    out.close();
    fs.close();
}



/** Copy an HDFS file to the local file system. */
public void copyToLocal(String src, String dst) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    FSDataInputStream in = fs.open(new Path(src));
    OutputStream out = new BufferedOutputStream(new FileOutputStream(new File(dst)));

    byte[] b = new byte[1024];
    int numBytes = 0;
    while ((numBytes = in.read(b)) > 0) {
        out.write(b, 0, numBytes);
    }

    in.close();
    out.close();
    fs.close();
}

In the example, we use the method FileSystem.create to create an FSDataOutputStream at the indicated Path. If the file exists, it will be overwritten by default. The Path object is used to locate a file or directory in HDFS. A Path is really a URI; for HDFS, it takes the format hdfs://host:port/location. To read an HDFS file, we use the method FileSystem.open, which returns an FSDataInputStream object. The rest of the example is just regular Java I/O stream operations.

3.1.7 Data Ingestion

Today, most data are generated and stored outside of Hadoop, e.g. in relational databases, plain files, etc. Therefore, data ingestion is the first step to utilize the power of Hadoop. To move the data into HDFS, we do not have to do low-level programming as in the previous example. Various utilities have been developed to move data into Hadoop.

3.1.7.1 Batch Data Ingestion

The File System Shell [8] includes various shell-like commands, including copyFromLocal and copyToLocal, that directly interact with HDFS as well as other file systems that Hadoop supports. Most of the commands in the File System Shell behave like the corresponding Unix commands. When the data files are ready in the local file system, the shell is a great tool to ingest data into HDFS in batch. In order to stream data into Hadoop for real-time analytics, however, we need more advanced tools, e.g. Apache Flume and Apache Chukwa.

3.1.7.2 Streaming Data Ingestion

Apache Flume [9] is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data into HDFS. It has a simple and flexible architecture based on streaming data flows, and it is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic applications. Flume employs the familiar producer-consumer model. A Source is the entity through which data enters Flume. Sources either actively poll for data or passively wait for data to be delivered to them. On the other hand, a Sink is the entity that delivers the data to the destination. Flume has many built-in sources (e.g. log4j and syslog) and sinks (e.g. HDFS and HBase). A Channel is the conduit between the Source and the Sink. Sources ingest events into the channel and sinks drain the channel. Channels allow the decoupling of the ingestion rate from the drain rate. When data are generated faster than the destination can handle, the channel size increases.
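The decoupling of ingestion rate from drain rate can be sketched in plain Java, with a bounded blocking queue standing in for the channel. This is a toy model of the Source/Channel/Sink pipeline, not Flume's actual API; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of Flume's Source -> Channel -> Sink pipeline. The bounded
// queue (channel) absorbs bursts when the source produces faster than
// the sink can drain.
public class FlumeModel {
    public static List<String> run(List<String> events, int channelCapacity) {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(channelCapacity);
        List<String> delivered = new ArrayList<>();

        // Source: pushes events into the channel on its own thread.
        Thread source = new Thread(() -> {
            for (String e : events) {
                try {
                    channel.put(e); // blocks when the channel is full
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        source.start();
        try {
            // Sink: drains the channel at its own pace.
            for (int i = 0; i < events.size(); i++) {
                delivered.add(channel.take());
            }
            source.join();
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return delivered;
    }
}
```

In this toy model a full channel applies backpressure to the source; a real Flume channel would instead buffer the backlog in memory or on disk.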

Apache Chukwa [7] is devoted to large-scale log collection and analysis, built on top of the MapReduce framework. Beyond data ingestion, Chukwa also includes a flexible and powerful toolkit for displaying, monitoring, and analyzing results. Different from Flume, Chukwa is not a continuous stream processing system but a mini-batch system.

Apache Kafka [13] and Apache Storm [18] may also be used to ingest streaming data into Hadoop although they are mainly designed to solve different problems. Kafka is a distributed publish-subscribe messaging system. It is designed to provide high throughput persistent messaging that’s scalable and allows for parallel data loads into Hadoop. Storm is a distributed realtime computation system for use cases such as realtime analytics, online machine learning, continuous computation, etc.

3.1.7.3 Structured Data Ingestion

Apache Sqoop [17] is a tool designed to efficiently transfer data between Hadoop and relational databases. We can use Sqoop to import data from a relational database table into HDFS. The import process is performed in parallel and thus generates multiple files in the format of delimited text, Avro, or SequenceFile. Besides, Sqoop generates a Java class that encapsulates one row of the imported table, which can be used in subsequent MapReduce processing of the data. Moreover, Sqoop can export the data (e.g. the results of MapReduce processing) back to the relational database for consumption by external applications or users.

3.2 MapReduce

Distributed parallel computing is not new. Supercomputers have been using MPI [34] for years for complex numerical computing. Although MPI provides a comprehensive API for data transfer and synchronization, it is not very suitable for big data. Due to the large data size and the shared-nothing architecture for scalability, data distribution and I/O are critical to big data analytics, while MPI almost ignores them. On the other hand, many big data analytics tasks are conceptually straightforward and do not need very complicated communication and synchronization mechanisms. Based on these observations, Google invented MapReduce [30] to deal with the issues of how to parallelize the computation, distribute the data, and handle failures.

3.2.1 Overview

In a shared-nothing distributed computing environment, a computation is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge as it minimizes network traffic and increases the overall throughput of the system. Therefore, it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. With GFS/HDFS, MapReduce provides such a parallel programming framework.

Inspired by the map and reduce functions commonly used in functional programming, a MapReduce program is composed of a Map() procedure that performs transformation and a Reduce() procedure that takes the shuffled output of Map as input and performs a summarization operation. More specifically, the user-defined Map function processes a key-value pair to generate a set of intermediate key-value pairs, and the Reduce function aggregates all intermediate values associated with the same intermediate key.
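This contract can be illustrated in a few lines of plain Java, using word count as the user-defined logic. The sketch below simulates the map, shuffle, and reduce phases in memory; it is not tied to the Hadoop API, and the class and method names are illustrative.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of the MapReduce contract: Map emits intermediate
// key-value pairs; the framework groups them by key; Reduce aggregates
// the values of each key. Word count is the user-defined logic here.
public class MiniMapReduce {
    public static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: emit <word, 1> for every word in every line.
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.split("\\s+")) {
                if (!word.isEmpty()) {
                    intermediate.add(new AbstractMap.SimpleEntry<>(word, 1));
                }
            }
        }

        // Shuffle phase: group all values by their intermediate key.
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (Map.Entry<String, Integer> pair : intermediate) {
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
        }

        // Reduce phase: sum the values associated with each key.
        Map<String, Integer> result = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }
}
```

In Hadoop the same logic is split across a Mapper and a Reducer class, and the shuffle is performed by the framework across machines.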

MapReduce applications are automatically parallelized and executed on a large cluster of commodity machines. During the execution, the Map invocations are distributed across multiple machines by automatically partitioning the input data into a set of M splits. The input splits can be processed in parallel by different machines. Reduce invocations are distributed by partitioning the intermediate key space into R pieces using a partitioning function. The number of partitions and the partitioning function are specified by the user. Besides partitioning the input data and running the various tasks in parallel, the framework also manages all communications and data transfers, load balance, and fault tolerance.

MapReduce provides programmers a really simple parallel computing paradigm. Because of automatic parallelization, no explicit handling of data transfer and synchronization in programs, and no deadlock, this model is very attractive. MapReduce is also designed to process very large data that is too big to fit into the memory (combined from all nodes). To achieve that, MapReduce employs a data flow model, which also provides a simple I/O interface to access a large amount of data in a distributed file system. It also exploits data locality for efficiency. In most cases, we do not need to worry about I/O at all.

3.2.2 Data Flow

For a given task, the MapReduce system runs as follows:

Prepare the Map() input: The system splits the input files into M pieces and then starts up M map workers on a cluster of machines.

Run the user-defined Map() code: The map worker parses key-value pairs out of the assigned split and passes each pair to the user-defined Map function. The intermediate key-value pairs produced by the Map function are buffered in memory. Periodically, the buffered pairs are written to local disk, partitioned into R regions for sharding purposes by the partitioning function (called the partitioner), which is given the key and the number of reducers R and returns the index of the desired reducer.

Shuffle the Map output to the Reduce processors: When ready, a reduce worker remotely reads the buffered data from the local disks of the map workers. When a reduce worker has read all intermediate data, it sorts the data by the intermediate keys so that all occurrences of the same key are grouped together. Typically many different keys map to the same reduce task.

Run the user-defined Reduce() code: The reduce worker iterates over the sorted intermediate data and, for each unique intermediate key encountered, passes the key and the corresponding set of intermediate values to the user's Reduce function.

Produce the final output: The final output is available in the R output files (one per reduce task).

Optionally, a combiner can be used between map and reduce as an optimization. The combiner function runs on the output of the map phase and is used as a filtering or aggregating step to lessen the data passed to the reducer. In most cases the reducer class is set to be the combiner class so that we can save network time. Note that this works only if the reduce function is commutative and associative.
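A minimal plain-Java sketch of why this works: for a sum-style reduce function, applying the reduce logic locally (the combiner) on each map task's output before merging across tasks yields the same totals as reducing the raw pairs, while fewer pairs cross the network. The class and method names below are illustrative only.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: the combiner runs the reduce logic on one map task's local
// output before the shuffle. Because integer addition is commutative
// and associative, reducing the combined partials equals reducing the
// raw pairs, but with fewer pairs shuffled.
public class CombinerSketch {
    // The reduce logic: sum values per key. Reused as the combiner.
    public static Map<String, Integer> sum(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> totals = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            totals.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return totals;
    }

    // Apply the combiner to one map task's output: one pair per key remains.
    public static List<Map.Entry<String, Integer>> combine(List<Map.Entry<String, Integer>> taskOutput) {
        List<Map.Entry<String, Integer>> combined = new ArrayList<>();
        for (Map.Entry<String, Integer> e : sum(taskOutput).entrySet()) {
            combined.add(new AbstractMap.SimpleEntry<>(e.getKey(), e.getValue()));
        }
        return combined;
    }
}
```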

In practice, one should pay attention to the task granularity, i.e. the number of map tasks M and the number of reduce tasks R. In general, M should be much larger than the number of nodes in the cluster, which improves load balancing and speeds up recovery from worker failure. The right level of parallelism for maps seems to be around 10-100 maps per node (maybe more for very CPU-light map tasks). Besides, the task setup takes a while. On a Hadoop cluster of 100 nodes, it takes 25 seconds until all nodes are executing the job. So it is best if the maps take at least a minute to execute. In Hadoop, one can call JobConf.setNumMapTasks(int) to set the number of map tasks. Note that it only provides a hint to the framework.

The number of reducers is usually a small multiple of the number of nodes. The right factor seems to be 0.95 for well-balanced data (per intermediate key) or 1.75 otherwise for better load balancing. Note that we reserve a few reduce slots for speculative tasks and failed tasks. We can set the number of reduce tasks by JobConf.setNumReduceTasks(int) in Hadoop, and the framework will honor it. It is fine to set R to zero if no reduction is desired.
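The rule of thumb can be sketched as a one-line calculation. Here suggestedReducers is a hypothetical helper, not a Hadoop API, and slotsPerNode stands for the configured number of reduce slots per node.

```java
// Sketch of the reducer-count rule of thumb:
// R = factor * (nodes * reduce slots per node),
// with factor 0.95 (one wave of reducers) or 1.75 (two waves for
// better load balancing on skewed data).
public class TaskGranularity {
    public static int suggestedReducers(int nodes, int slotsPerNode, double factor) {
        return (int) Math.round(factor * nodes * slotsPerNode);
    }
}
```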

3.2.3 Secondary Sorting

The output of the mappers is first sorted by the intermediate keys. Sometimes, however, we also want to sort the intermediate values (or some fields of the intermediate values), e.g. when calculating a stock price moving average where the key is the stock ticker and the value is a pair of timestamp and stock price. If the values of a given key are sorted by the timestamp, we can easily calculate the moving average with a sliding window over the values. This problem is called secondary sorting.

A direct approach to secondary sorting is for the reducer to buffer all of the values for a given key and do an in-memory sort. Unfortunately, it may cause the reducer to run out of memory.

Alternatively, we may use a composite key that has multiple parts. In the case of calculating the moving average, we may create a composite key of (ticker, timestamp) and also provide a customized sort comparator (a subclass of WritableComparator) that compares the ticker and then the timestamp. To ensure that only the ticker (referred to as the natural key) is considered when determining which reducer to send the data to, we need to write a custom partitioner (a subclass of Partitioner) that is solely based on the natural key. Once the data reaches a reducer, all data is grouped by key. Since we have a composite key, we need to make sure records are grouped solely by the natural key by implementing a group comparator (another subclass of WritableComparator) that considers only the natural key.
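The mechanics can be simulated in plain Java without Hadoop: a composite-key comparator orders records by ticker and then timestamp, and grouping by the natural key alone then yields, for each ticker, its values already sorted by time. All names below are illustrative.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of secondary sorting with a composite key.
// Records are (ticker, timestamp, price). The sort comparator orders by
// ticker then timestamp; grouping then looks only at the natural key
// (ticker), so each ticker's values arrive already time-ordered.
public class SecondarySort {
    public static class Quote {
        final String ticker; final long timestamp; final double price;
        public Quote(String ticker, long timestamp, double price) {
            this.ticker = ticker; this.timestamp = timestamp; this.price = price;
        }
    }

    // Groups quotes by ticker; within each group, timestamps are sorted.
    public static Map<String, List<Long>> sortedTimesByTicker(List<Quote> quotes) {
        List<Quote> sorted = new ArrayList<>(quotes);
        // Composite-key comparator: natural key first, then the secondary field.
        sorted.sort(Comparator.<Quote, String>comparing(q -> q.ticker)
                              .thenComparingLong(q -> q.timestamp));
        // Group comparator analogue: consecutive records with the same
        // natural key form one reduce group.
        Map<String, List<Long>> groups = new LinkedHashMap<>();
        for (Quote q : sorted) {
            groups.computeIfAbsent(q.ticker, k -> new ArrayList<>()).add(q.timestamp);
        }
        return groups;
    }
}
```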

3.2.4 Examples

Hadoop implements MapReduce in Java. To create a MapReduce program, please add the following dependencies to the project’s Maven POM file.

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.6.0</version>
</dependency>

3.2.4.1 Sort

The essential part of the MapReduce framework is a large distributed sort. So we just let the framework do the job in this case, while the map is as simple as emitting the sort key and the original input. In the example below, we simply assume the input key is the sort key. The reduce operator is an identity function.

public class SortMapper extends Mapper<IntWritable, Text, IntWritable, Text> {

    public void map(IntWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(key, value);
    }
}

public class SortReducer extends Reducer<IntWritable, Text, IntWritable, Text> {

    public void reduce(IntWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            context.write(key, value);
        }
    }
}

Although this example is extremely simple, there are many important classes to understand. The MapReduce framework takes key-value pairs as the input and produces a new set of key-value pairs (maybe of different types). The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework.

The map method of a Mapper implementation processes one key-value pair in the input split at a time. The reduce method of a Reducer implementation is called once for each intermediate key and its associated group of values. In this case, we do not have to override the map and reduce methods because the default implementation is actually an identity function. The sample code is mainly to show the interface. Both Mapper and Reducer emit their output through the Context object provided by the framework.

To submit a MapReduce job to Hadoop, we need to follow the steps below. First, the application describes various facets of the job via a Job object. Job is typically used to specify the Mapper, Reducer, InputFormat, and OutputFormat implementations, the directories of input files, and the location of output files. Optionally, one may specify advanced facets of the job such as the Combiner, Partitioner, Comparator, DistributedCache, etc. Then the application submits the job to the cluster by the method waitForCompletion(boolean verbose) and waits for it to finish. Job also allows the user to control the execution and query the state.

public class MapReduceSort {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "sort");
        job.setJarByClass(MapReduceSort.class);
        job.setMapperClass(SortMapper.class);
        job.setReducerClass(SortReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

3.2.4.2 Grep

The map function emits a line if it matches a given pattern. The reduce part is not necessary in this case, and we can simply set the number of reduce tasks to zero (job.setNumReduceTasks(0)). Note that the Mapper implementation also overrides the setup method, which will be called once at the beginning of the task. In this case, we use it to set the search pattern from the job configuration. This is also a good example of passing small configuration data to MapReduce tasks. To pass large amounts of read-only data to tasks, DistributedCache is preferred and will be discussed later in the case of Inner Join. Similar to setup, one may also override the cleanup method, which will be called once at the end of the task.

public class GrepMapper<K> extends Mapper<K, Text, K, Text> {

    public static String PATTERN = "mapper.pattern";
    private Pattern pattern;

    // Setup the match pattern from the job context.
    // Called once at the beginning of the task.
    public void setup(Context context) {
        Configuration conf = context.getConfiguration();
        pattern = Pattern.compile(conf.get(PATTERN));
    }

    public void map(K key, Text value, Context context)
            throws IOException, InterruptedException {
        if (pattern.matcher(value.toString()).find()) {
            context.write(key, value);
        }
    }
}

In a relational database, one can achieve this by the following simple query in SQL.

SELECT * FROM T_KV WHERE value LIKE '%XYZ%';

Although this query requires a full table scan, a parallel DBMS can easily outperform MapReduce in this case because the setup cost of MapReduce is high. The performance gap will be much larger when an index can be used, such as

SELECT * FROM T_PERSON WHERE age > 30;

3.2.4.3 Aggregation

Aggregation is a simple analytic calculation such as counting the number of accesses or users from different countries. WordCount, the “hello world” program in the MapReduce world, is an example of aggregation. WordCount simply counts the number of occurrences of each word in a given input set. The Mapper splits the input line into words and emits a key-value pair of <word, 1>. The Reducer just sums up the values. For the sample code, please refer to Hadoop’s MapReduce Tutorial [14].

For SQL, aggregation simply means GROUP BY such as the following example:

SELECT country, count(*) FROM T_WEB_LOG GROUP BY country;

With a combiner, the aggregation in MapReduce works pretty much the same as in a parallel DBMS. Of course, a DBMS can still benefit a lot from an index on the group-by field.

3.2.4.4 Inner Join

An inner join operation combines two data sets, A and B, to produce a third one containing all record pairs from A and B with matching attribute values. The sort-merge join algorithm and the hash-join algorithm are two common alternatives to implement the join operation in a parallel data flow environment [32]. In sort-merge join, both A and B are sorted by the join attribute and then compared in sorted order; the matching pairs are inserted into the output stream. The hash join first prepares a hash table of the smaller data set with the join attribute as the hash key. Then we scan the larger dataset and find the relevant rows from the smaller dataset by searching the hash table.
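The hash join described above can be sketched in a few lines of in-memory plain Java (illustrative names, not a MapReduce implementation): build a hash table on the smaller input's join key, then stream the larger input and probe the table.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the hash-join: rows are {joinKey, value}. The smaller input
// is hashed by its join attribute; the larger input is scanned once and
// probed against the hash table.
public class HashJoin {
    public static List<String[]> join(List<String[]> small, List<String[]> large) {
        // Build phase: hash the smaller dataset by its join attribute.
        Map<String, List<String>> table = new HashMap<>();
        for (String[] row : small) {
            table.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        // Probe phase: a single scan of the larger dataset.
        List<String[]> out = new ArrayList<>();
        for (String[] row : large) {
            for (String match : table.getOrDefault(row[0], Collections.emptyList())) {
                out.add(new String[]{row[0], match, row[1]});
            }
        }
        return out;
    }
}
```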

There are several ways to implement join in MapReduce, e.g. the reduce-side join and the map-side join. The reduce-side join is a straightforward approach that takes advantage of the fact that identical keys are sent to the same reducer. In the reduce-side join, the output key of the Mapper has to be the join key so that matching records reach the same reducer. The Mapper also tags each dataset with an identity to differentiate them in the reducer. With secondary sorting on the dataset identity, we ensure the order of values sent to the reducer, which generates the matched pairs for each join key. Because the two datasets are usually in different formats, we can use the class MultipleInputs to set up a different InputFormat and Mapper for each input path. The reduce-side join belongs to the sort-merge join family and scales very well for large datasets. However, it may be less efficient in the case of data skew or when one dataset is significantly smaller than the other.

If one dataset is small enough to fit into memory, we may use the memory-based map-side join. In this approach, the Mappers side-load the smaller dataset and build a hash table of it during setup, and process the rows of the larger dataset one by one in the map function. To efficiently load the smaller dataset in every Mapper, we should use the DistributedCache, a facility to cache application-specific large, read-only files. An application specifies the files to be cached by Job.addCacheFile(URI). The MapReduce framework will copy the necessary files onto the slave node before any tasks for the job are executed on that node. This is much more efficient than copying the files for each Mapper. Besides, we can declare the hash table as a static field so that the tasks running successively in a JVM will share the data using the task JVM reuse feature. Thus, we need to load the data only once for each JVM.

The above map-side join is fast but only works when the smaller dataset fits in memory. To avoid this pitfall, we can use the multi-phase map-side join. First we run a MapReduce job on each dataset that uses the join attribute as the Mapper’s and Reducer’s output key and has the same number of reducers for all datasets. In this way, all datasets are sorted by the join attribute and have the same number of partitions. In the second phase, we use CompositeInputFormat as the input format. The CompositeInputFormat performs joins over a set of data sources sorted and partitioned the same way, which is guaranteed by the first phase. So the records are already merged before they reach the Mapper, which simply outputs the joined records to the stream.

Because the join implementation is fairly complicated, we will not show the sample code here. In practice, one should use higher level tools such as Hive or Pig to join data sets rather than reinventing the wheel.

In practice, join, aggregation, and sort are frequently used together, e.g. finding the client of the ad that generates the most revenue (or clicks) during a period. In MapReduce, this has to be done in multiple phases. The first phase filters the data based on the click timestamp and joins the client and click log datasets. The second phase does the aggregation on the output of the join, and the third one finishes the task by sorting the output of the aggregation.

Various benchmarks show that parallel DBMSs are way faster than MapReduce for joins [65]. Again, an index on the join key is very helpful. But more importantly, joins can be done locally on each node if both tables are partitioned by the join key, so that no data transfer is needed before the join.

3.2.4.5 K-Means Clustering

The k-means clustering is a simple and widely used method that partitions data into k clusters in which each record belongs to the cluster with the nearest center, serving as a prototype of the cluster [51]. The most common algorithm for k-means clustering is Lloyd’s algorithm, which iteratively proceeds by alternating between two steps. The assignment step assigns each sample to the cluster with the nearest mean. The update step calculates the new means as the centroids of the samples in the new clusters. The algorithm converges when the assignments no longer change. The algorithm can be naturally implemented in the MapReduce framework, where each iteration is a MapReduce job.

Input: The data files as regular MapReduce input and cluster center files side-loaded by DistributedCache. Initially, the cluster centers may be randomly selected.

Map: With side-loaded cluster centers, each sample input is mapped to the cluster with the nearest mean. The emitted key-value pair is <cluster id, sample vector>.

Combine: In order to reduce the data passed to the reducer, we may have a combiner that aggregates samples belonging to the same cluster.

Reduce: The reduce tasks recalculate the new means of the clusters as the centroids of the samples in the new clusters. The output of new cluster means will be used as the input to the next iteration.

Iterate: This process is repeated until the algorithm converges or reaches the maximum number of iterations.

Output: Run a map-only job to output the cluster assignments.
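The heart of the algorithm, one Lloyd iteration, can be sketched in plain Java over one-dimensional samples; the assignment loop plays the Map role and the centroid update plays the Reduce role. This is an illustrative in-memory sketch, not a distributed implementation.

```java
// One Lloyd iteration over 1-D samples: assign each sample to the
// nearest mean (the Map role), then recompute each mean as the centroid
// of its assigned samples (the Reduce role).
public class KMeansStep {
    public static double[] iterate(double[] samples, double[] means) {
        int k = means.length;
        double[] sum = new double[k];
        int[] count = new int[k];

        // Assignment step: each sample goes to the cluster of the nearest mean.
        for (double x : samples) {
            int best = 0;
            for (int j = 1; j < k; j++) {
                if (Math.abs(x - means[j]) < Math.abs(x - means[best])) best = j;
            }
            sum[best] += x;
            count[best]++;
        }

        // Update step: the new mean is the centroid of the cluster's samples.
        double[] next = means.clone();
        for (int j = 0; j < k; j++) {
            if (count[j] > 0) next[j] = sum[j] / count[j];
        }
        return next;
    }
}
```

Iterating this step until the means stop changing reproduces the convergence behavior described above; in the MapReduce version, each call corresponds to one job.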

Such an implementation is very scalable: it can handle very large data sizes, possibly even larger than the combined memory of the cluster. On the other hand, it is not very efficient because the input data have to be read again and again for each iteration. This is a general performance issue when implementing iterative algorithms on MapReduce.

3.2.5 Shortcomings

The above examples show that MapReduce is capable of a variety of tasks. On the other hand, they also demonstrate several drawbacks of MapReduce.

3.2.5.1 Performance

MapReduce provides a scalable programming model on large clusters. However, it is not guaranteed to be fast, for many reasons:

Even though Hadoop now reuses JVM instances for map and reduce tasks, the startup time is still significant on large clusters. The high startup cost means that MapReduce is mainly suitable for long-running batch jobs.

The communication between map and reduce tasks is always done by remote file access, which often dominates the computation cost. Such a pulling strategy is great for fault tolerance, but it results in low performance compared to a push mechanism. Besides, there could be M * R intermediate files. Given large M and R, this is certainly a challenge for the underlying file system. With multiple reducers running simultaneously, it is highly likely that some of them will attempt to read from the same map node at the same time, inducing a large number of disk seeks and slowing the effective disk transfer rate.

Iterative algorithms perform poorly on MapReduce because the input data have to be read again and again. Data must also be materialized and replicated on the distributed file system between successive jobs.

3.2.5.2 Low Level Programming Interface

A major goal of MapReduce is to provide a simple programming model in which application developers need only write the map and reduce parts of the program. However, in practice programmers have to take care of a lot of things, such as input/output formats, partition functions, comparison functions, combiners, and job configuration, to achieve good performance. As shown in the example, even a very simple grep MapReduce program is fairly long. On the other hand, the same query in SQL is much shorter and cleaner.

MapReduce is independent of the underlying storage system. It is the application developers’ duty to organize the data, such as building and using any indices, partitioning and collocating related data sets, etc. Unfortunately, these are not easy tasks in the context of HDFS and MapReduce.

3.2.5.3 Limited Parallel Computing Model

The simple computing model of MapReduce brings us no explicit handling of data transfer and synchronization in programs, and no deadlock. But it is a limited parallel computing model, essentially a scatter-gather processing model. For non-trivial algorithms, programmers have to try hard to “MapReducize” them, often in a non-intuitive way.

After years of practice, the community has realized these problems and tries to address them in different ways. For example, Apache Spark aims at speed by keeping data in memory. Apache Pig provides a DSL and Hive provides a SQL dialect on top of MapReduce to ease the programming. Google Dremel and Cloudera Impala target interactive analysis with SQL queries. Microsoft Dryad and Apache Tez provide a more general parallel computing framework that models computations as DAGs. Google Pregel and Apache Giraph address computing problems on large graphs. Apache Storm focuses on real-time event processing. We will look into all of them in the rest of the book. First, we will check out Tez and Spark in this chapter.

3.3 Tez

MapReduce provides a scatter-gather parallel computing model, which is very limited. Dryad, a research project at Microsoft Research, attempted to support a more general purpose runtime for parallel data processing [50]. A Dryad job is a directed acyclic graph (DAG) where each vertex is a program and edges represent data channels (files, TCP pipes, or shared-memory FIFOs). The DAG defines the data flow of the application, and the vertices of the graph define the operations that are to be performed on the data. It is a logical computation graph that is automatically mapped onto physical resources by the runtime. Dryad includes a domain-specific language, implemented in C++ as a library using a mixture of method calls and operator overloading, that is used to create and model a Dryad execution graph. Dryad is notable for allowing graph vertices to use an arbitrary number of inputs and outputs, while MapReduce restricts all computations to take a single input set and generate a single output set. Although Dryad provided a nice alternative to MapReduce, Microsoft discontinued active development on Dryad in October 2011, shifting focus to the Apache Hadoop framework.

Interestingly, the Apache Hadoop community recently picked up the idea of Dryad and developed Apache Tez [19, 70], a new runtime framework on YARN, during the Stinger initiative of Hive [36]. Similar to Dryad, Tez is an application framework that allows for a complex directed acyclic graph of tasks for processing data. The edges of the data flow graph determine how the data is transferred and the dependency between the producer and consumer vertices. Edge properties enable Tez to instantiate user tasks, configure their inputs and outputs, schedule them appropriately, and define how to route data between the tasks. The edge properties include:

Data movement determines the routing of data between tasks:

One-To-One: Data from the i-th producer task routes to the i-th consumer task.

Broadcast: Data from a producer task routes to all consumer tasks.

Scatter-Gather: Producer tasks scatter data into shards and consumer tasks gather the shards; the i-th shard from all producer tasks routes to the i-th consumer task.

Scheduling determines when a consumer task is scheduled:

Sequential: A consumer task may be scheduled after a producer task completes.

Concurrent: A consumer task must be co-scheduled with a producer task.

Data source determines the lifetime/reliability of a task output:

Persisted: Output will be available after the task exits, but may be lost later on.

Persisted-Reliable: Output is reliably stored and will always be available.

Ephemeral: Output is available only while the producer task is running.

For example, MapReduce would be expressed with the scatter-gather, sequential and persisted edge properties.
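These edge properties can be sketched as a small model. The following is a hypothetical Python rendering for illustration only; the real Tez API is Java and differs in detail:

```python
# A minimal sketch of Tez edge properties (hypothetical Python model;
# the real Tez API is Java and more involved).
from dataclasses import dataclass
from enum import Enum

class DataMovement(Enum):
    ONE_TO_ONE = 1      # i-th producer task -> i-th consumer task
    BROADCAST = 2       # a producer task -> all consumer tasks
    SCATTER_GATHER = 3  # i-th shard of every producer -> i-th consumer

class Scheduling(Enum):
    SEQUENTIAL = 1      # consumer may run after a producer completes
    CONCURRENT = 2      # consumer must be co-scheduled with a producer

class DataSource(Enum):
    PERSISTED = 1           # available after the task exits, may be lost later
    PERSISTED_RELIABLE = 2  # reliably stored, always available
    EPHEMERAL = 3           # available only while the producer runs

@dataclass
class EdgeProperty:
    movement: DataMovement
    scheduling: Scheduling
    source: DataSource

# The map -> reduce edge of classic MapReduce, expressed in Tez terms:
map_reduce_edge = EdgeProperty(DataMovement.SCATTER_GATHER,
                               Scheduling.SEQUENTIAL,
                               DataSource.PERSISTED)
```

Richer DAGs simply combine these same three properties differently on each edge.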

The vertices in the data flow graph define the user logic that transforms the data. Tez models each vertex as a composition of Input, Processor and Output modules. Input and Output determine the data format and how and where it is read or written. An Input represents a pipe through which a Processor can accept data from a data source such as HDFS, or from the output generated by another vertex, while an Output represents a pipe through which a Processor can emit data for another vertex to consume or to a data sink such as HDFS. The Processor holds the data transformation logic, which consumes one or more Inputs and produces one or more Outputs.

The Tez runtime expands the logical graph into a physical graph by adding parallelism at the vertices, i.e. multiple tasks are created per logical vertex to perform the computation in parallel. A logical edge in a DAG is also materialized as a number of physical connections between the tasks of two connected vertices. Tez also supports pluggable vertex management modules to collect information from tasks and change the data flow graph at runtime to optimize performance and resource usage.
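The expansion from logical to physical graph can be sketched as follows (an illustrative Python model with invented names, not Tez's implementation):

```python
# Sketch: expanding a logical vertex into parallel tasks and a logical
# scatter-gather edge into physical task-to-task connections (illustrative only).
def tasks(name, parallelism):
    """Each logical vertex becomes `parallelism` physical tasks."""
    return [f"{name}-task-{i}" for i in range(parallelism)]

def scatter_gather_connections(producer_tasks, consumer_tasks):
    """A scatter-gather logical edge materializes as all-to-all connections."""
    return [(p, c) for p in producer_tasks for c in consumer_tasks]

map_tasks = tasks("map", 3)
reduce_tasks = tasks("reduce", 2)
physical_edges = scatter_gather_connections(map_tasks, reduce_tasks)  # 3 x 2 = 6
```

A vertex manager could adjust the parallelism numbers at runtime based on statistics gathered from completed tasks, which is exactly the kind of dynamic optimization Tez enables.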

With Tez, Apache Hive can now process data in a single Tez job that would otherwise take multiple MapReduce jobs. If the data processing is too complicated to finish in a single Tez job, a Tez session can encompass multiple jobs by leveraging common services, which provides additional performance optimizations.

Like MapReduce, Tez is still a lower-level programming model. To obtain good performance, the developer must understand the structure of the computation and the organization and properties of the system resources.

3.4 YARN

Originally, Hadoop was restricted mainly to the MapReduce paradigm, where resource management is done by the JobTracker and TaskTrackers. The JobTracker farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that have the data, or at least nodes in the same rack. A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from the JobTracker. Because Hadoop has stretched beyond MapReduce (e.g. HBase, Storm, etc.), Hadoop now architecturally decouples resource management from the MapReduce programming model, which makes Hadoop clusters more generic. The new resource manager is referred to as MapReduce 2.0 (MRv2) or YARN [43]. Now MapReduce is just one kind of application running in a YARN container, and other types of applications can be written generically to run on YARN.

YARN employs a master-slave model and includes several components:

The global Resource Manager is the ultimate authority that arbitrates resources among all applications in the system.

The per-application Application Master negotiates resources from the Resource Manager and works with the Node Managers to execute and monitor the component tasks.

The per-node slave Node Manager is responsible for launching the applications’ containers, monitoring their resource usage and reporting to the Resource Manager.

The Resource Manager, consisting of the Scheduler and the Application Manager, is the central authority that arbitrates resources among the various competing applications in the cluster. The Scheduler is responsible for allocating resources to the running applications subject to constraints of capacities, queues, etc. The Application Manager is responsible for accepting job submissions, negotiating the first container for executing the application-specific Application Master, and providing the service for restarting the Application Master container on failure.

The Scheduler uses the abstract notion of a Resource Container, which incorporates elements such as memory, CPU, disk, network, etc. Initially, YARN used memory-based scheduling: each node is configured with a set amount of memory, and applications request containers for their tasks with configurable amounts of memory. More recently, YARN added CPU as a resource in the same manner: nodes are configured with a number of “virtual cores” (vcores) and applications specify the number of vcores in the container request.

The Scheduler has a pluggable policy plug-in, which is responsible for partitioning the cluster resources among the various queues, applications, etc. For example, the Capacity Scheduler is designed to maximize the throughput and the utilization of shared, multi-tenant clusters. Queues are the primary abstraction in the Capacity Scheduler. The capacity of each queue specifies the percentage of cluster resources available for applications submitted to the queue, and queues can be set up in a hierarchy. YARN also sports a Fair Scheduler that tries to assign resources such that all applications get, on average, an equal share of resources over time, using dominant resource fairness [39].
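The core of dominant resource fairness can be sketched in a few lines. This is a simplified Python model with invented names; the real Fair Scheduler also accounts for queues, weights and preemption:

```python
# Sketch of dominant resource fairness (DRF): an application's dominant share
# is its largest fractional usage across resource types, and the next
# container goes to the application with the smallest dominant share.
CLUSTER = {"memory_mb": 1024, "vcores": 8}

def dominant_share(usage):
    return max(usage[r] / CLUSTER[r] for r in CLUSTER)

def next_to_schedule(apps):
    """Pick the application currently farthest behind on its dominant resource."""
    return min(apps, key=lambda name: dominant_share(apps[name]))

# App A is memory-heavy (dominant share 512/1024 = 0.5); app B is CPU-heavy
# (dominant share 2/8 = 0.25), so B is scheduled next.
apps = {"A": {"memory_mb": 512, "vcores": 1},
        "B": {"memory_mb": 128, "vcores": 2}}
winner = next_to_schedule(apps)
```

The point of DRF is that comparing dominant shares gives a sensible notion of fairness even when applications stress different resources.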

The protocol between YARN and applications is as follows. First, an application submission client communicates with the Resource Manager to acquire a new application id. Then it submits the application to be run by providing sufficient information (e.g. the local files/jars, command line, environment settings, etc.) for the Resource Manager to launch the Application Master. The Application Master is then expected to register itself with the Resource Manager and to request and receive containers. After a container is allocated to it, the Application Master communicates with the Node Manager to launch the container for its task by specifying the launch information such as the command line, environment, etc. The Application Master also handles failures of job containers. Once the task is completed, the Application Master signals the Resource Manager.
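The sequence of interactions can be sketched with stub components. This is a Python toy with invented names; the real client API is Java (YarnClient, AMRMClient, NMClient):

```python
# The YARN submission protocol, sketched with hypothetical stub components.
trace = []

class ResourceManager:
    def __init__(self):
        self.next_id = 0
    def new_application_id(self):
        self.next_id += 1
        trace.append("RM: issued application id")
        return self.next_id
    def submit_application(self, app_id):
        trace.append(f"RM: launching ApplicationMaster for app {app_id}")
    def register_am(self, app_id):
        trace.append(f"RM: registered ApplicationMaster of app {app_id}")
    def allocate(self, app_id, n):
        trace.append(f"RM: allocated {n} containers to app {app_id}")
        return list(range(1, n + 1))
    def finish(self, app_id):
        trace.append(f"RM: app {app_id} finished")

class NodeManager:
    def launch_container(self, container, cmd):
        trace.append(f"NM: launched container {container} running '{cmd}'")

rm, nm = ResourceManager(), NodeManager()
app_id = rm.new_application_id()        # 1. client acquires an application id
rm.submit_application(app_id)           # 2. client submits the application
rm.register_am(app_id)                  # 3. the AM registers with the RM
for c in rm.allocate(app_id, 2):        # 4. the AM requests and receives containers
    nm.launch_container(c, "task.sh")   # 5. the AM asks Node Managers to launch them
rm.finish(app_id)                       # 6. the AM signals completion
```

Note the division of labor: the Resource Manager hands out containers, but it is the Application Master, not the RM, that talks to the Node Managers to launch them.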

As the central authority of the YARN cluster, the Resource Manager is also a single point of failure (SPOF). To make it fault tolerant, an Active/Standby architecture can be employed since Hadoop 2.4. Multiple Resource Manager instances (listed in the configuration file yarn-site.xml) can be brought up, but only one instance is Active at any point of time while the others are in Standby mode. When the Active goes down or becomes unresponsive, another Resource Manager is automatically elected to be the Active by a ZooKeeper-based method. ZooKeeper is a replicated CP key-value store, which we will discuss in detail later. Clients, Application Masters and Node Managers try connecting to the Resource Managers in a round-robin fashion until they hit the new Active.

Chapter 4

Spark

4.1 RDD

An RDD is a read-only collection of objects partitioned across a cluster of computers that can be operated on in parallel. A Spark application consists of a driver program that creates RDDs from HDFS files or an existing Scala collection. The driver program may transform an RDD in parallel by invoking supported operations with user-defined functions, which return another RDD. The driver can also persist an RDD in memory, allowing it to be reused efficiently across parallel operations. In fact, the semantics of RDDs go well beyond simple parallelization:

Abstract: The elements of an RDD do not have to exist in physical memory. In this sense, an element of an RDD is an expression rather than a value; the value can be computed by evaluating the expression when necessary.

Lazy and Ephemeral: One can construct an RDD from a file or by transforming an existing RDD with operations such as map(), filter(), groupByKey(), reduceByKey(), join(), cogroup(), cartesian(), etc. However, no real data loading or computation happens at construction time. Instead, RDDs are materialized on demand when they are used in some operation, and are discarded from memory after use.

Caching and Persistence: We can cache a dataset in memory across operations, which allows future actions to be much faster. Caching is a key tool for iterative algorithms and fast interactive use. Caching is actually a special case of persistence, which allows different storage levels, e.g. persisting the dataset on disk, persisting it in memory as serialized Java objects (to save space), replicating it across nodes, or storing it off-heap in Tachyon [75]. The storage level is chosen by passing a StorageLevel object to persist(). The cache() method is shorthand for the default storage level, StorageLevel.MEMORY_ONLY (store deserialized objects in memory).

Fault Tolerant: If any partition of an RDD is lost, it will automatically be recomputed using the transformations that originally created it.
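The abstract, lazy and ephemeral nature of RDDs can be illustrated with a toy model. This is plain Python with invented names, not Spark's implementation:

```python
# A toy sketch of RDD semantics: a dataset is an expression (a function
# producing its elements), evaluated only when an action runs, unless cached.
class ToyRDD:
    def __init__(self, compute):
        self.compute = compute   # a zero-argument function yielding elements
        self.cached = None

    def map(self, f):            # lazy: no work happens here
        return ToyRDD(lambda: (f(x) for x in self._elements()))

    def filter(self, p):         # lazy as well
        return ToyRDD(lambda: (x for x in self._elements() if p(x)))

    def cache(self):             # materialize once (Spark's own cache is
        self.cached = list(self.compute())  # itself lazy; simplified here)
        return self

    def _elements(self):
        return iter(self.cached) if self.cached is not None else self.compute()

    def collect(self):           # an action finally triggers evaluation
        return list(self._elements())

numbers = ToyRDD(lambda: iter([1, 2, 3, 4]))
evens = numbers.filter(lambda x: x % 2 == 0).map(lambda x: x * 10)  # nothing computed yet
result = evens.collect()        # evaluation happens here
```

Without cache(), every collect() would recompute the chain from the source; with it, future actions read the materialized list, which is the essence of Spark's speedup for iterative workloads.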

The operations on RDDs take user-defined functions, which are closures in the functional programming sense, as Spark is implemented in Scala. A closure can refer to variables in the scope where it was created, and these variables are copied to the workers when Spark runs the closure. Spark optimizes this process with shared variables for a couple of cases:

Broadcast variables: If a large read-only dataset is used in multiple operations, it is better to copy it to the workers only once. Similar to the idea of DistributedCache, this can be achieved with broadcast variables, which are created from a variable v by calling SparkContext.broadcast(v).

Accumulators: Accumulators are variables that are only “added” to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters or sums. Only the driver program can read an accumulator’s value. Spark natively supports accumulators of numeric types.
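Both patterns can be illustrated with a toy model (plain Python with invented names, not Spark's classes): a broadcast value is shipped to each worker once, while an accumulator is merged associatively and read only by the driver.

```python
# Toy sketch of Spark's two shared-variable patterns.
class ToyAccumulator:
    def __init__(self):
        self.total = 0
    def add(self, v):            # workers only add
        self.total += v
    def merge(self, other):      # the driver merges per-worker results
        self.total += other.total
        return self

stop_words = {"a", "the"}        # "broadcast": copied to each worker once
partitions = [["a", "big", "data"], ["the", "cluster"]]

# Each "worker" counts non-stop-words in its own local accumulator ...
per_worker = []
for part in partitions:
    acc = ToyAccumulator()
    for word in part:
        if word not in stop_words:
            acc.add(1)
    per_worker.append(acc)

# ... and only the driver reads the merged value.
total = ToyAccumulator()
for acc in per_worker:
    total.merge(acc)
```

Because addition is associative, the per-worker partial sums can be merged in any order, which is exactly what makes accumulators cheap to support in parallel.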

By reusing cached data in RDDs, Spark offers a great performance improvement over MapReduce (10x ~ 100x faster). Thus, it is very suitable for iterative machine learning algorithms. Similar to MapReduce, Spark is independent of the underlying storage system. It is the application developers’ duty to organize the data, e.g. building and using indexes, partitioning and collocating related data sets, etc. These are critical for interactive analytics: caching alone is insufficient and not effective for extremely large data.

4.2 Implementation

The RDD object implements a simple interface, which consists of three operations:

getPartitions returns a list of partition IDs.

getIterator(partition) iterates over a partition.

getPreferredLocations(partition) is used to achieve data locality.

When a parallel operation is invoked on a dataset, Spark creates a task to process each partition of the dataset and sends these tasks to worker nodes. Spark tries to send each task to one of its preferred locations. Once launched on a worker, each task calls getIterator to start reading its partition.
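A toy in-memory rendering of this interface may make the scheduling picture concrete. The names here are invented for illustration; Spark's real RDD class also tracks dependencies, a partitioner, and more:

```python
# The three-operation RDD interface with a tiny in-memory implementation.
class InMemoryRDD:
    def __init__(self, partitions, hosts):
        self.partitions = partitions   # list of lists: one list per partition
        self.hosts = hosts

    def get_partitions(self):
        return list(range(len(self.partitions)))

    def get_iterator(self, partition):
        return iter(self.partitions[partition])

    def get_preferred_locations(self, partition):
        # pretend each partition lives on a single host
        return [self.hosts[partition % len(self.hosts)]]

rdd = InMemoryRDD([[1, 2], [3, 4, 5]], ["node1", "node2"])
# A scheduler would create one task per partition and send it to a preferred host:
task_plan = [(p, rdd.get_preferred_locations(p)[0]) for p in rdd.get_partitions()]
```

Each task in the plan then calls get_iterator on its own partition, which is exactly the behavior described above.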

4.3 API

Spark is implemented in Scala and provides high-level APIs in Scala, Java, and Python. The following examples are in Scala. A Spark program needs to create a SparkContext object:

val conf = new SparkConf().setAppName(appName).setMaster(master)

val sc = new SparkContext(conf)

The appName parameter is a name for your application to show on the cluster UI and the master is a cluster URL or a special “local” string to run in local mode.

Then we can create RDDs from any storage source supported by Hadoop. Spark supports text files, SequenceFiles, etc. Text file RDDs can be created using SparkContext’s textFile method. This method takes a URI for the file (directories, compressed files, and wildcards work as well) and reads it as a collection of lines.

val lines = sc.textFile("data.txt")

We can create a new RDD by transforming from an existing one, such as map, flatMap, filter, etc. We can also aggregate all the elements of an RDD using some function, e.g. reduce, reduceByKey, etc.

val lengths = lines.map(s => s.length)

Beyond the basic operations such as map and reduce, Spark also provides advanced operations such as union, intersection, join and cogroup, which create a new dataset from two existing RDDs. All these operations take functions from the driver program to run on the cluster. Thanks to the functional features of Scala, the code is a lot simpler and cleaner than MapReduce, as shown in the examples.
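Since these combinators mirror ordinary collection operations, the data flow can be illustrated without a cluster. The following is plain Python, shown only to trace what each step produces:

```python
# The same pipeline as the Scala examples, mirrored with in-memory collections.
lines = ["big data", "spark is fast", "hadoop"]

lengths = [len(s) for s in lines]          # like rdd.map
total_length = sum(lengths)                # like rdd.reduce

# Word count, the "hello world" of MapReduce, in the same style:
counts = {}
for word in (w for line in lines for w in line.split()):  # like rdd.flatMap
    counts[word] = counts.get(word, 0) + 1                # plays the role of reduceByKey
```

The difference in Spark is only where the work runs: the same logical chain is partitioned and executed on the cluster instead of in a single process.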

As we discussed, RDDs are lazy and ephemeral. If we need to access an RDD multiple times, it is better to persist it in memory using the persist (or cache) method.

lengths.persist()

Spark also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for event processing. We will discuss these technologies later in related chapters.

Chapter 5

Analytics and Data Warehouse

5.1 Pig

Different from many other projects that bring SQL to Hadoop, Pig is special in that it provides a procedural (data flow) programming language, Pig Latin, as it was designed for experienced programmers. However, SQL programmers won’t have difficulty understanding Pig Latin programs because most statements look just like SQL clauses.

A Pig Latin program is a sequence of steps, each of which carries out a single high-level data transformation, e.g. loading, filtering, grouping, etc. The input data can be loaded from the file system or HBase with the LOAD operator:

grunt> persons = LOAD 'person.csv' USING PigStorage(',') AS (name:chararray, age:int, address:(street:chararray, city:chararray, state:chararray, zip:int));

where grunt> is the prompt of the Pig console and PigStorage is a built-in deserializer for structured text files. Various deserializers are available, and user-defined functions (UDFs) can be used to parse data in unsupported formats. The AS clause defines a schema that assigns names to fields and declares their types. Although schemas are optional, programmers are encouraged to use them whenever possible. Note that such a “schema on read” approach is very different from the relational approach, which requires rigid predefined schemas; there is no need to copy or reorganize the data.

Pig has a rich data model. Primitive data types include int, long, float, double, chararray, bytearray, boolean, datetime, biginteger and bigdecimal. Complex data types include tuple, bag (a collection of tuples), and map (a set of key-value pairs). Different from the relational model, the fields of a tuple can be of any data type. Similarly, map values can be of any type (the map key is always of type chararray). That is, nested data structures are supported.

Once the input data have been specified, there is a rich set of relational operators to transform them. The FOREACH...GENERATE operator, corresponding to the map tasks of MapReduce, produces a new bag by projection, applying functions, etc.

grunt> flatten_persons = FOREACH persons GENERATE name, age, FLATTEN(address);

where FLATTEN is a function that removes one level of nesting. With the DESCRIBE operator, we can see the schema difference between persons and flatten_persons:

grunt> DESCRIBE persons;

persons: {name: chararray, age: int, address: (street: chararray, city: chararray, state: chararray, zip: int)}

grunt> DESCRIBE flatten_persons;

flatten_persons: {name: chararray, age: int, address::street: chararray, address::city: chararray, address::state: chararray, address::zip: int}

Frequently, we want to filter the data based on some condition.

grunt> adults = FILTER flatten_persons BY age > 18;

Aggregations can be done with the GROUP operator, which corresponds to the reduce tasks in MapReduce.

grunt> grouped_by_state = GROUP flatten_persons BY state;

grunt> DESCRIBE grouped_by_state;

grouped_by_state: {group: chararray, flatten_persons: {(name: chararray, age: int, address::street: chararray, address::city: chararray, address::state: chararray, address::zip: int)}}

The result of a GROUP operation is a relation that includes one tuple per group, with two fields:

The first field is named “group” and has the same type as the group key.

The second field takes the name of the original relation and is of type bag.

We can also cogroup two or more relations.

grunt> cogrouped_by_name = COGROUP persons BY name, flatten_persons BY name;

grunt> DESCRIBE cogrouped_by_name;

cogrouped_by_name: {group: chararray, persons: {(name: chararray, age: int, address: (street: chararray, city: chararray, state: chararray, zip: int))}, flatten_persons: {(name: chararray, age: int, address::street: chararray, address::city: chararray, address::state: chararray, address::zip: int)}}

In fact, the GROUP and COGROUP operators are identical. Both operators work with one or more relations. For readability, GROUP is used in statements involving one relation while COGROUP is used when involving two or more relations.
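The two-field structure of a GROUP result resembles an ordinary group-by. As a loose analogy (plain Python with invented data, not Pig's execution model):

```python
# Mimic "GROUP flatten_persons BY state;": one entry per group, holding the
# group key and a bag of the original tuples (illustrative analogy only).
from collections import defaultdict

flatten_persons = [
    ("alice", 30, "CA"),
    ("bob",   25, "NY"),
    ("carol", 35, "CA"),
]

grouped_by_state = defaultdict(list)
for person in flatten_persons:
    state = person[2]                        # the "group" field
    grouped_by_state[state].append(person)   # the bag named after the relation
```

A COGROUP over two relations would simply keep two such bags per group key, one per input relation.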

A closely related but different operator is JOIN, which is syntactic sugar for COGROUP followed by FLATTEN.

grunt> joined_by_name = JOIN persons BY name, flatten_persons BY name;

grunt> DESCRIBE joined_by_name;

joined_by_name: {persons::name: chararray, persons::age: int, persons::address: (street: chararray, city: chararray, state: chararray, zip: int), flatten_persons::name: chararray, flatten_persons::age: int, flatten_persons::address::street: chararray, flatten_persons::address::city: chararray, flatten_persons::address::state: chararray, flatten_persons::address::zip: int}

Overall, a Pig Latin program is like a handcrafted query execution plan. In contrast, a SQL based solution, e.g. Hive, relies on an execution planner to automatically translate SQL statements to an execution plan. Like SQL, Pig Latin has no control structures. But it is possible to embed Pig Latin statements and Pig commands in the Python, JavaScript and Groovy scripts.

When you run the above statements in the Pig console, you will notice that they finish instantaneously. This is because Pig is lazy and no real computation has happened. For example, LOAD does not actually read the data but just returns a handle to a bag/relation. Only when a STORE command is issued does Pig materialize the result of a Pig Latin expression sequence to the file system. Before a STORE command, Pig just builds a logical plan for every user-defined bag. At the point of a STORE command, the logical plan is compiled into a physical plan (a directed acyclic graph of MapReduce jobs) and executed.

It is possible to replace MapReduce with other execution engines in Pig. For example, there are efforts to run Pig on top of Spark. However, is it necessary? Spark already provides many relational operators, and the host language Scala is well suited to writing concise and expressive programs.

In summary, Pig Latin is a simple and easy-to-use DSL that makes MapReduce programming a lot easier. Meanwhile, Pig keeps the flexibility of MapReduce to process schemaless data in plain files. There is no need for slow and complex ETL tasks before analysis, which makes Pig a great tool for quick ad hoc analytics such as web log analysis.

5.2 Hive

Although many statements in Pig Latin look just like SQL clauses, it is a procedural programming language. In this section we discuss Apache Hive, which first brought SQL to Hadoop. Similar to Pig, Hive translates queries in its own SQL dialect (HiveQL) into a directed acyclic graph of MapReduce (or, since 0.13, Tez) jobs. However, the difference between Pig and Hive is not only procedural vs. declarative. Pig is a relatively thin layer on top of MapReduce for offline analytics, while Hive is oriented toward a data warehouse. With the recent Stinger initiative, Hive moves closer to interactive analytics with a 100x performance improvement.

Pig uses a “schema on read” approach in which users define the (optional) schema when loading data. In contrast, Hive requires users to provide the schema, (optional) storage format and serializer/deserializer (called SerDe) when creating a table. This information is saved in the metadata repository (by default an embedded Derby database) and is used whenever the table is referenced, e.g. to typecheck the expressions in a query and to prune partitions based on query predicates. The metadata store also provides data discovery (e.g. SHOW TABLES and DESCRIBE), which enables users to discover and explore relevant data in the warehouse. The following example shows how to create a database and a table.

CREATE DATABASE portal;

USE portal;

CREATE TABLE weblog (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?"
)
STORED AS TEXTFILE;

The interesting part of the example is the bottom lines, which specify a custom regular-expression SerDe and the plain text file format. If ROW FORMAT is not specified or ROW FORMAT DELIMITED is specified, a native SerDe is used. Besides plain text files, many other file formats are supported. Later we will discuss more details on ORC files, which improve query performance significantly.

Different from relational data warehouses, Hive supports nested data models with complex types array, map, and struct. For example, the following statement creates a table with a complex schema.

CREATE TABLE complex_table (
  id STRING,
  value FLOAT,
  list_of_maps ARRAY<MAP<STRING, STRUCT<x: INT, y: INT>>>
);

By default, all the data files for a table are located in a single directory. Tables can be physically partitioned based on values of one or more columns with the PARTITIONED BY clause. A separate directory is created for each distinct value combination in the partition columns. Partitioning can greatly speed up queries that test those columns. Note that the partitioning columns are not part of the table data and the partition column values are encoded in the directory path of that partition (and also stored in the metadata store). Moreover, tables or partitions can be bucketed using CLUSTERED BY columns, and data can be sorted within that bucket via SORT BY columns.

Now we can load some data into our table:

LOAD DATA LOCAL INPATH 'portal/logs' OVERWRITE INTO TABLE weblog;

Note that Hive does not do any verification of data against the schema or any transformation while loading data into tables. The input files are simply copied or moved into Hive’s file system namespace. If the keyword LOCAL is specified, the input files are assumed to be in the local file system; otherwise they are in HDFS. While not necessary in this example, the OVERWRITE keyword signifies that existing data in the table is overwritten; if it is omitted, data files are appended to the existing data set.

Tables can also be created and populated by the results of a query in a create-table-as-select (CTAS) statement that includes two parts. The SELECT part can be any SELECT statement supported by HiveQL. The CREATE part of the CTAS takes the resulting schema from the SELECT part and creates the target table with other table properties such as the SerDe and storage format.

CREATE TABLE orc_weblog
STORED AS ORC
AS
SELECT * FROM weblog;

Similarly, query results can be inserted into tables with the INSERT clause. INSERT OVERWRITE will overwrite any existing data in the table or partition, while INSERT INTO will append to it. Multiple insert clauses can be specified in the same query, which minimizes the number of data scans required.

Hive does not support the OLTP-style INSERT INTO that inserts a new record. HiveQL does not have UPDATE and DELETE clauses either. This is actually a good design choice as these clauses are not necessary for data warehouses. Without them, Hive can use very simple mechanisms to deal with reader and writer concurrency.

For queries, HiveQL is pretty much what you see in SQL. Besides common SQL features (e.g. JOIN, WHERE, HAVING, GROUP BY, SORT BY, ...), HiveQL also has extensions such as TABLESAMPLE, LATERAL VIEW, OVER, etc. We will not dive into the syntax of query statements. Instead, we will discuss the Stinger initiative, which improved query performance significantly.

A big contribution of the Stinger initiative is the Optimized Row Columnar (ORC) file. In the previous example, we used TEXTFILE, in which each line/row contains a record. In fact, most relational and document databases employ such a row-oriented storage format. However, a column-oriented file format has advantages for data warehouses, where aggregates are computed over large numbers of data items. For example, only the column values required by each query are scanned and transferred during query execution. Besides, column data is of uniform type and thus may achieve better compression, especially if the cardinality of the column is low. Before ORC files, Hive already had a columnar file format, RCFile. However, RCFile is data-type-agnostic and its corresponding SerDe serializes a single row at a time. In ORC files, the SerDe is de-emphasized and the ORC file writer is data-type aware. So an ORC file can decompose a complex column into multiple child columns, and various type-specific encoding schemes can be applied to the primitive data streams to store data efficiently. The ORC file also supports indexes. Well, these indexes are not B-trees but basically data statistics and position pointers. The data statistics are used in query optimization and to answer simple aggregation queries, and they are also helpful to avoid unnecessary data reads. The position pointers are used to locate the index groups and stripes.

The Stinger initiative also put a lot of effort into improving query planning and execution. For example, unnecessary map-only jobs are eliminated. In Hive, a map-only job is generated when the query planner converts a reduce join into a map join. Now Hive tries to merge the generated map-only job into its child job if the total size of the small tables used to build hash tables in the merged job is under a configurable threshold. Besides, a correlation optimizer was developed to avoid unnecessary data loading and repartitioning, so that Hive loads a common table only once instead of multiple times and the optimized plan has fewer shuffle phases.

Besides MapReduce, Hive now embeds Apache Tez as an execution engine. Compared to MapReduce’s simple scatter/gather model, Tez offers a customizable execution architecture that models complex computations as dataflow graphs with dynamic performance optimizations. With Tez, Hive can translate complex SQL statements into efficient physical plans. For example, several reduce sinks can be linked directly in Tez and data can be pipelined without the need for temporary HDFS files. This pattern is referred to as MRR (map-reduce-reduce*). Join is also much easier in Tez because a Tez task may take multiple bipartite edges as input, thus exposing the input relations directly to the join implementation. A shuffle join task taking multiple feeds is called a multi-parent shuffle join (MPJ). Both MRR and MPJ are employed in Hive to speed up a wide variety of queries.

Another potential benefit of Tez is to avoid unnecessary disk writes. In MapReduce, map outputs are partitioned, sorted and written to disk, then pulled, merge-sorted and fed into the reducers. Tez allows small datasets to be handled entirely in memory. This is attractive as many analytic queries generate small intermediate datasets after the heavy lifting. Moreover, Tez allows complete control over the processing, e.g. stopping processing when limits are met. Unfortunately, these features are not yet used in Hive.

There is also work to employ Spark as a third execution engine in Hive, called Hive on Spark. Hive on Spark is still in an early stage, and it is not designed to replace Tez or MapReduce, as each has different strengths depending on the use case. Shark and Spark SQL are similar attempts. We will discuss them in detail later.

Finally, let’s briefly talk about vectorized query execution. But first note that “vectorized” here does not mean using vector computing facilities such as SSE/AVX or CUDA. Instead, it aims to improve runtime execution efficiency by taking advantage of the characteristics of modern CPUs. The one-row-at-a-time model of MapReduce is not friendly to modern CPUs, which heavily rely on pipelining, superscalar (multiple-issue) execution, and caches. In the vectorized execution model, data is processed in batches of rows through the operator tree, whose expressions work on column vectors and produce output in column vectors. The vectorized expressions are carefully designed to minimize branching and function calls. A good compiler may also unroll the tight loops of vectorized expressions to effectively make use of the superscalar pipelines without delays. Furthermore, the batch size is configurable and should be chosen so that the entire batch fits in the processor cache. Experiments show that the vectorized execution engine is 3 - 5 times faster than the original Hive execution engine.
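The contrast between the two models can be sketched for a query like "sum of size where status = 200". This is an illustrative Python toy; Hive's vectorized operators are Java and far more elaborate:

```python
# Row-at-a-time versus vectorized (batched, columnar) evaluation of
# "SELECT sum(size) WHERE status = 200" (illustrative only).
rows = [(200, 10), (404, 5), (200, 7)]      # (status, size) per row

def row_at_a_time(rows):
    # one tuple unpack and branch per row, interleaving all columns
    total = 0
    for status, size in rows:
        if status == 200:
            total += size
    return total

def vectorized(status_col, size_col, batch_size=1024):
    # process column vectors batch by batch in tight loops
    total = 0
    for start in range(0, len(status_col), batch_size):
        statuses = status_col[start:start + batch_size]
        sizes = size_col[start:start + batch_size]
        for i in range(len(statuses)):      # tight loop over one column vector
            if statuses[i] == 200:
                total += sizes[i]
    return total

status_col = [200, 404, 200]                # the same data stored column-wise
size_col = [10, 5, 7]
```

In an interpreter both versions do similar work; the payoff of the vectorized form appears in compiled engines, where the tight per-column loop is friendly to branch prediction, caches, and superscalar pipelines.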

Invented at Facebook, Hive has been battle proven. Today, Facebook’s Hive data warehouse holds 300 PB of data with an incoming daily rate of about 600 TB! The Stinger initiative makes Hive more suitable for interactive analytics, although its speed is not the fastest on the market.

5.3 Impala

After Apache Hive first brought SQL to Hadoop, several new SQL-on-Hadoop solutions were introduced. In particular, Cloudera Impala and Apache Drill both run on Hadoop and are inspired by Google Dremel, which was designed for interactive analysis of web-scale datasets. In a nutshell, they are native massively parallel processing query engines on read-only data. Impala is implemented in C++ while Drill is implemented in Java. Both Impala and Drill can query Hive tables directly; Impala actually uses Hive’s metastore.

Hive is basically a front end that parses SQL statements, generates and optimizes logical plans, and translates them into physical plans that are finally executed by a backend such as MapReduce or Tez. Dremel and its derivatives are different as they execute queries natively without translating them into MapReduce jobs. For example, the core Impala component is a daemon process that runs on each node of the cluster as the query planner, coordinator, and execution engine. Each node can accept queries. The planner turns a request into collections of parallel plan fragments. The coordinator initiates execution on remote nodes in the cluster. The execution engine reads and writes data files, and transmits intermediate query results back to the coordinator node.

The two core technologies of Dremel are columnar storage for nested data and the tree architecture for query execution:

Columnar Storage: Data is stored in a columnar fashion to achieve a very high compression ratio and scan throughput.

Tree Architecture: The architecture forms a massively parallel distributed multi-level serving tree for pushing a query down the tree and then aggregating the results from the leaves.
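The serving-tree idea can be sketched in a few lines for a simple aggregate. This is an illustrative Python toy, not Dremel's implementation:

```python
# Sketch of serving-tree execution: the query is pushed down the tree and
# partial aggregates flow back up (illustrative only).
def execute_sum(node):
    """Compute SELECT sum(x) over a tree of ("leaf", values) and
    ("inner", children) nodes."""
    kind, payload = node
    if kind == "leaf":                       # a leaf scans its local columnar data
        return sum(payload)
    return sum(execute_sum(child) for child in payload)  # merge children's partials

tree = ("inner", [
    ("inner", [("leaf", [1, 2]), ("leaf", [3])]),
    ("leaf", [4, 5]),
])
total = execute_sum(tree)
```

Because each level only merges small partial aggregates, the root never touches raw data, which is what lets the tree fan out to thousands of leaves.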

These are good ideas and have been adopted by other systems. For example, Hive 0.13 has the ORC file format for columnar storage and can use Tez as an execution engine that structures the computation as a directed acyclic graph. Both (along with other innovations) help improve the performance of Hive considerably. However, a recent benchmark from Cloudera (the vendor of Impala) [28] and the benchmark by AMPLab [4] show that Impala still holds a performance lead over Hive. It is well known that benchmarks are often biased by the hardware setup, software tweaks, the queries chosen for testing, etc. But it is still meaningful to find out which design choices and implementation details cause this performance difference, and doing so may help both communities improve their offerings in the future. What follows is a list of possible reasons:

As a native query engine, Impala avoids the startup overhead of MapReduce/Tez jobs. It is well known that MapReduce programs take some time before all nodes are running at full capacity. In Hive, every query suffers from this “cold start” problem. In contrast, Impala daemon processes are started at boot time and are thus always ready to execute a query.

Hadoop reuses JVM instances to partially reduce this startup overhead. However, reuse introduces another problem. For big data processing, we prefer large memory. For example, the recommended physical memory for an Impala node is 128 GB or higher, and the nodes in the aforementioned benchmark have 384 GB of memory. Such a big heap is a serious challenge for the garbage collector of the reused JVM instances: stop-the-world GC pauses may add high latency to queries. Impala’s execution engine, written in native C++ code, avoids this problem. Impala may also do a better job of cache management.

Impala processes are multithreaded. Importantly, the scanning portion of plan fragments is multithreaded and also makes use of SSE4.2 instructions. The I/O and network subsystems are highly multithreaded as well. Therefore, each Impala node runs more efficiently thanks to a high degree of local parallelism.
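
The intra-node scan parallelism can be sketched as follows. This is a hypothetical illustration (Impala’s scanners are C++ threads, not Python): one node splits its local data into chunks and filters them on a thread pool.

```python
# Sketch of intra-node scan parallelism: a node splits its local
# rows into chunks and scans them concurrently on a thread pool.
# Function and parameter names are illustrative, not Impala's API.
from concurrent.futures import ThreadPoolExecutor

def scan_chunk(chunk, predicate):
    """One scanner thread: filter a chunk of local data."""
    return [row for row in chunk if predicate(row)]

def parallel_scan(rows, predicate, num_threads=4):
    """Split rows into chunks and scan them in parallel."""
    size = max(1, len(rows) // num_threads)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        results = pool.map(scan_chunk, chunks, [predicate] * len(chunks))
    # Flatten the per-chunk results, preserving chunk order.
    return [row for part in results for row in part]

matches = parallel_scan(list(range(100)), lambda x: x % 10 == 0)
print(matches)  # prints [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```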

Impala’s query execution is pipelined as much as possible. In the case of aggregation, the coordinator starts the final aggregation as soon as the pre-aggregation fragments start to return results. In contrast, in MapReduce the sort and reduce phases can only start once all the mappers are done. Tez does not support pipelined execution yet.
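
The difference can be sketched with Python generators standing in for plan fragments (a loose analogy, not how either engine is implemented): in the barrier style every pre-aggregated result is materialized before the final aggregation starts, while in the pipelined style partial results stream straight into it.

```python
# Generators as stand-ins for plan fragments: barrier-style
# execution materializes all intermediate results, pipelined
# execution streams them downstream as they are produced.

def scan(rows):
    """Scan fragment: produce rows one at a time."""
    for row in rows:
        yield row

def pre_aggregate(rows, batch_size=2):
    """Pre-aggregation fragment: emit a partial sum per batch."""
    batch_sum, n = 0, 0
    for row in rows:
        batch_sum += row
        n += 1
        if n == batch_size:
            yield batch_sum          # partial result flows downstream
            batch_sum, n = 0, 0
    if n:
        yield batch_sum

def final_aggregate(partials):
    """Coordinator fragment: consume partials as they arrive."""
    return sum(partials)

# Barrier style: fully materialize intermediate results first.
materialized = list(pre_aggregate(scan(range(10))))
barrier_result = final_aggregate(materialized)

# Pipelined style: partials stream into the final aggregation with
# no intermediate list (and, in a real engine, no disk write).
pipelined_result = final_aggregate(pre_aggregate(scan(range(10))))

assert barrier_result == pipelined_result == 45
```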

MapReduce materializes all intermediate results, while Impala streams intermediate results between executors. Tez allows different types of Input/Output, including file, TCP, etc., but it seems that Hive does not yet use this feature to avoid unnecessary disk writes.

The reducer of MapReduce employs a pull model to fetch map output partitions. For sorted output, Tez makes use of the MapReduce ShuffleHandler, which requires downstream Inputs to pull data over HTTP. With multiple reducers (or downstream Inputs) running simultaneously, it is highly likely that some of them will attempt to read from the same map node at the same time, inducing a large number of disk seeks and slowing the effective disk transfer rate.

Hive’s query expression evaluation code is generated at compile time, while Impala performs runtime code generation for “big loops” using LLVM, which can produce more optimized code.
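
The codegen idea can be illustrated with a toy expression evaluator (Impala emits LLVM IR; here Python’s `eval` of generated source stands in, and the expression-tree format is invented for this sketch): interpreting a tree pays per-node dispatch overhead on every row, whereas a generated function evaluates the whole expression in one call.

```python
# Why runtime code generation helps "big loops": interpreting an
# expression tree dispatches per node per row, while a generated
# function evaluates the expression directly. Tree format is a toy.

# Expression tree for: col_a * 2 + col_b
tree = ("add", ("mul", ("col", "a"), ("lit", 2)), ("col", "b"))

def interpret(node, row):
    """Walk the tree for every row: per-node dispatch overhead."""
    op = node[0]
    if op == "col":
        return row[node[1]]
    if op == "lit":
        return node[1]
    lhs, rhs = interpret(node[1], row), interpret(node[2], row)
    return lhs + rhs if op == "add" else lhs * rhs

def codegen(node):
    """Flatten the tree into source text once, then compile it."""
    op = node[0]
    if op == "col":
        return f"row[{node[1]!r}]"
    if op == "lit":
        return repr(node[1])
    sym = "+" if op == "add" else "*"
    return f"({codegen(node[1])} {sym} {codegen(node[2])})"

# "Compile" once; the big loop then calls the specialized function.
compiled = eval(f"lambda row: {codegen(tree)}")

row = {"a": 3, "b": 4}
assert interpret(tree, row) == compiled(row) == 10
```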

Tez allows complete co