Module 01 - Hadoop Installation and Setup

1.1 The architecture of Hadoop cluster

1.2 What is High Availability and Federation?

1.3 How to set up a production cluster?

1.4 Various shell commands in Hadoop

1.5 Understanding configuration files in Hadoop

1.6 Installing a single node cluster with Cloudera Manager

1.7 Understanding Spark, Scala, Sqoop, Pig, and Flume

Module 02 - Introduction to Big Data Hadoop and Understanding HDFS and MapReduce

2.1 Introducing Big Data and Hadoop

2.2 What is Big Data and where does Hadoop fit in?

2.3 Two important Hadoop ecosystem components, namely, MapReduce and HDFS

2.4 In-depth Hadoop Distributed File System – replication, block size, Secondary NameNode, High Availability – and in-depth YARN – ResourceManager and NodeManager

Hands-on Exercise:

1. HDFS working mechanism

2. Data replication process

3. How to determine the size of the block?

4. Understanding a data node and name node

Module 03 - Deep Dive in MapReduce

3.1 Learning the working mechanism of MapReduce

3.2 Understanding the mapping and reducing stages in MR

3.3 Various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle, and Sort

Hands-on Exercise:

1. How to write a WordCount program in MapReduce?

2. How to write a Custom Partitioner?

3. What is a MapReduce Combiner?

4. How to run a job in a local job runner

5. Deploying a unit test

6. What are map-side and reduce-side joins?

7. What is a tool runner?

8. How to use counters and join datasets with map-side and reduce-side joins?

Module 04 - Introduction to Hive

4.1 Introducing Hadoop Hive

4.2 Detailed architecture of Hive

4.3 Comparing Hive with Pig and RDBMS

4.4 Working with Hive Query Language

4.5 Creation of a database and tables, and working with the group by and other clauses

4.6 Various types of Hive tables, HCatalog

4.7 Storing the Hive Results, Hive partitioning, and Buckets

Hands-on Exercise:

1. Database creation in Hive

2. Dropping a database

3. Hive table creation

4. How to change the database?

5. Data loading

6. Dropping and altering table

7. Pulling data by writing Hive queries with filter conditions

8. Table partitioning in Hive

9. What is a group by clause?

Module 05 - Advanced Hive and Impala

5.1 Indexing in Hive

5.2 The Map-Side Join in Hive

5.3 Working with complex data types

5.4 The Hive user-defined functions

5.5 Introduction to Impala

5.6 Comparing Hive with Impala

5.7 The detailed architecture of Impala

Hands-on Exercise:

1. How to work with Hive queries?

2. The process of joining the table and writing indexes

3. External table and sequence table deployment

4. Data storage in a different table

Module 06 - Introduction to Pig

6.1 Apache Pig introduction and its various features

6.2 Various data types and schemas in Pig

6.3 The available functions in Pig; Pig Bags, Tuples, and Fields

Hands-on Exercise:

1. Working with Pig in MapReduce and local mode

2. Loading of data

3. Limiting data to 4 rows

4. Storing the data into files and working with Group By, Filter By, Distinct, Cross, Split in Pig

Module 07 - Flume, Sqoop and HBase

7.1 Apache Sqoop introduction

7.2 Importing and exporting data

7.3 Performance improvement with Sqoop

7.4 Sqoop limitations

7.5 Introduction to Flume and understanding the architecture of Flume

7.6 What is HBase and the CAP theorem?

Hands-on Exercise:

1. Working with Flume to generate sequence numbers and consume them

2. Using the Flume Agent to consume the Twitter data

3. Using AVRO to create Hive Table

4. AVRO with Pig

5. Creating Table in HBase

6. Deploying Disable, Scan, and Enable Table

Module 08 - Writing Spark Applications Using Scala

8.1 Using Scala for writing Apache Spark applications

8.2 Detailed study of Scala

8.3 The need for Scala

8.4 The concept of object-oriented programming

8.5 Executing the Scala code

8.6 Various classes in Scala like getters, setters, constructors, abstract, extending objects, overriding methods

8.7 The Java and Scala interoperability

8.8 The concept of functional programming and anonymous functions

8.9 Bobsrockets package and comparing the mutable and immutable collections

8.10 Scala REPL, Lazy Values, Control Structures in Scala, Directed Acyclic Graph (DAG), first Spark application using SBT/Eclipse (a minimal sketch follows this list), Spark Web UI, Spark in Hadoop ecosystem
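
Topic 8.10's "first Spark application" can be as small as the following sketch, written in Scala and built with SBT against the usual spark-core and spark-sql dependencies. It is only a minimal illustration: the HDFS input path is an assumed placeholder, and the chain of transformations it builds is the DAG you can inspect in the Spark Web UI.

```scala
// Minimal first Spark application (Scala). The input path is a placeholder.
import org.apache.spark.sql.SparkSession

object FirstSparkApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FirstSparkApp").getOrCreate()
    val sc = spark.sparkContext

    // Classic word count: the lineage of these transformations forms the DAG
    // that is visible in the Spark Web UI once the job runs.
    val counts = sc.textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    spark.stop()
  }
}
```

Package it with sbt package and submit it with spark-submit to see the job, its stages, and the DAG in the Spark Web UI.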

Hands-on Exercise:

1. Writing Spark application using Scala

2. Understanding the robustness of Scala for Spark real-time analytics operation

Module 09 - Spark framework

9.1 A detailed look at Apache Spark and its various features

9.2 Comparing with Hadoop

9.3 Various Spark components

9.4 Combining HDFS with Spark and Scalding

9.5 Introduction to Scala

9.6 Importance of Scala and RDD

Hands-on Exercise:

1. The Resilient Distributed Dataset (RDD) in Spark

2. How does it help to speed up Big Data processing?

Module 10 - RDD in Spark

10.1 Understanding the Spark RDD operations

10.2 Comparison of Spark with MapReduce

10.3 What is a Spark transformation?

10.4 Loading data in Spark

10.5 Types of RDD operations, viz. transformations and actions (see the sketch after this list)

10.6 What is a Key/Value pair?
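
To make the transformation-versus-action distinction in topic 10.5 concrete, here is a minimal Scala sketch in the spirit of the count-log-severity exercise below. The log path and the assumption that each line begins with its severity level are placeholders, not part of the course material.

```scala
// Minimal sketch: transformations are lazy; actions trigger execution.
import org.apache.spark.sql.SparkSession

object RddBasics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("RddBasics").getOrCreate()
    val sc = spark.sparkContext

    val lines    = sc.textFile("hdfs:///data/server.log")          // base RDD from an external file
    val errors   = lines.filter(_.contains("ERROR"))               // transformation: lazy
    val severity = lines.map(line => (line.split("\\s+")(0), 1))   // key/value pairs: (severity, 1)
    val counts   = severity.reduceByKey(_ + _)                     // transformation: still lazy

    counts.collect().foreach(println)                              // action: triggers the computation
    println(s"ERROR lines: ${errors.count()}")                     // another action
    spark.stop()
  }
}
```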

Hands-on Exercise:

1. How to deploy RDD with HDFS?

2. Using the in-memory dataset

3. Using file for RDD

4. How to define the base RDD from an external file?

5. Deploying RDD via transformation

6. Using the Map and Reduce functions

7. Working on word count and count log severity

Module 11 - Data Frames and Spark SQL

11.1 Spark SQL in detail

11.2 The significance of SQL in Spark for working with structured data processing

11.3 Spark SQL JSON support

11.4 Working with XML data and parquet files

11.5 Creating Hive Context

11.6 Writing Data Frame to Hive

11.7 How to read data from a JDBC source?

11.8 Significance of a Spark data frame

11.9 How to create a data frame? (see the sketch after this list)

11.10 What is manual schema inference?

11.11 Working with CSV files, reading JDBC tables, converting Data Frames to JDBC, Spark SQL user-defined functions, shared variables, and accumulators

11.12 How to query and transform data in Data Frames?

11.13 How do Data Frames provide the benefits of both Spark RDD and Spark SQL?

11.14 Deploying Hive on Spark as the execution engine
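
As a quick illustration of topics 11.3, 11.9, and 11.10, the sketch below creates a Data Frame from a JSON file, lets Spark infer the schema, and queries it through SQL. The file path and the name and age fields are assumptions made only for the example.

```scala
// Minimal sketch: DataFrame from JSON, inferred schema, SQL query.
import org.apache.spark.sql.SparkSession

object DataFrameBasics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DataFrameBasics").getOrCreate()

    val people = spark.read.json("hdfs:///data/people.json")   // schema is inferred from the data
    people.printSchema()

    people.createOrReplaceTempView("people")                    // expose the DataFrame to SQL
    spark.sql("SELECT name, age FROM people WHERE age > 30").show()

    spark.stop()
  }
}
```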

Hands-on Exercise:

1. Data querying and transformation using Data Frames

2. Finding out the benefits of Data Frames over Spark SQL and Spark RDD

Module 12 - Machine Learning Using Spark (MLlib)

12.1 Introduction to Spark MLlib

12.2 Understanding various algorithms

12.3 What are iterative algorithms in Spark?

12.4 Spark graph processing analysis

12.5 Introducing Machine Learning

12.6 K-Means clustering

12.7 Spark variables like shared and broadcast variables

12.8 What are accumulators?

12.9 Various ML algorithms supported by MLlib

12.10 Linear regression, logistic regression, decision tree, random forest, and K-means clustering techniques

Hands-on Exercise:

1. Building a recommendation engine

Module 13 - Integrating Apache Flume and Apache Kafka

13.1 Why Kafka?

13.2 What is Kafka?

13.3 Kafka architecture

13.4 Kafka workflow

13.5 Configuring Kafka cluster

13.6 Basic operations

13.7 Kafka monitoring tools

13.8 Integrating Apache Flume and Apache Kafka

Hands-on Exercise:

1. Configuring Single Node Single Broker Cluster

2. Configuring Single Node Multi Broker Cluster

3. Producing and consuming messages

4. Integrating Apache Flume and Apache Kafka

Module 14 - Spark Streaming

14.1 Introduction to Spark streaming

14.2 The architecture of Spark streaming

14.3 Working with the Spark streaming program

14.4 Processing data using Spark streaming

14.5 Requesting count and DStream

14.6 Multi-batch and sliding window operations

14.7 Working with advanced data sources

14.8 Features of Spark streaming

14.9 Spark Streaming workflow

14.10 Initializing StreamingContext (see the sketch after this list)

14.11 Discretized Streams (DStreams)

14.12 Input DStreams and Receivers

14.13 Transformations on DStreams

14.14 Output Operations on DStreams

14.15 Windowed operators and their uses

14.16 Important Windowed operators and Stateful operators
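
The following minimal Scala sketch ties topics 14.10 through 14.14 together and matches the Netcat streaming exercise below: it initializes a StreamingContext, builds a DStream from a socket, applies transformations, and prints the output of each batch. The host and port are assumptions.

```scala
// Minimal Spark Streaming word count over a Netcat socket.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetcatWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("NetcatWordCount")
    val ssc  = new StreamingContext(conf, Seconds(5))            // 5-second batches

    val lines  = ssc.socketTextStream("localhost", 9999)         // input DStream from the socket
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()                                               // output operation on the DStream

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Start a Netcat server with nc -lk 9999, submit the application, and type lines into the terminal to see the per-batch word counts.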

Hands-on Exercise:

1. Twitter Sentiment analysis

2. Streaming using Netcat server

3. Kafka-Spark streaming

4. Spark-Flume streaming

Module 15 - Hadoop Administration – Multi-node Cluster Setup Using Amazon EC2

15.1 Create a 4-node Hadoop cluster setup

15.2 Running the MapReduce Jobs on the Hadoop cluster

15.3 Successfully running the MapReduce code

15.4 Working with the Cloudera Manager setup

Hands-on Exercise:

1. The method to build a multi-node Hadoop cluster using an Amazon EC2 instance

2. Working with the Cloudera Manager

Module 16 - Hadoop Administration – Cluster Configuration

16.1 Overview of Hadoop configuration

16.2 The importance of Hadoop configuration files

16.3 The various parameters and values of configuration

16.4 The HDFS parameters and MapReduce parameters

16.5 Setting up the Hadoop environment

16.6 The Include and Exclude configuration files

16.7 The administration and maintenance of name node, data node directory structures, and files

16.8 What is a File system image?

16.9 Understanding Edit log

Hands-on Exercise:

1. The process of performance tuning in MapReduce

Module 17 - Hadoop Administration – Maintenance, Monitoring and Troubleshooting

17.1 Introduction to the checkpoint procedure, name node failure

17.2 Ensuring the recovery procedure, Safe Mode, metadata and data backup, various potential problems and solutions, what to look for, and how to add and remove nodes

Hands-on Exercise:

1. Ensuring MapReduce file system recovery for different scenarios

2. JMX monitoring of the Hadoop cluster

3. How to use the logs and stack traces for monitoring and troubleshooting

4. Using the Job Scheduler for scheduling jobs in the same cluster

5. Getting the MapReduce job submission flow

6. The FIFO Scheduler

7. Getting to know the Fair Scheduler and its configuration

Module 18 - ETL Connectivity with Hadoop Ecosystem (Self-Paced)

18.1 How do ETL tools work in the Big Data industry?

18.2 Introduction to ETL and data warehousing

18.3 Working with prominent use cases of Big Data in the ETL industry

18.4 End-to-end ETL PoC showing Big Data integration with an ETL tool

Hands-on Exercise:

1. Connecting to HDFS from an ETL tool

2. Moving data from the local system to HDFS

3. Moving data from a DBMS to HDFS

4. Working with Hive from the ETL tool

5. Creating a MapReduce job in the ETL tool

Module 19 - Project Solution Discussion and Cloudera Certification Tips and Tricks

19.1 Working towards the solution of the Hadoop project

19.2 Its problem statements and the possible solution outcomes

19.3 Preparing for the Cloudera certifications

19.4 Points to focus on for scoring the highest marks

19.5 Tips for cracking Hadoop interview questions

Hands-on Exercise:

1. A real-world, high-value Big Data Hadoop application project

2. Getting the right solution based on the criteria set by the Intellipaat team

The following topics will be available only in self-paced mode:

Module 20 - Hadoop Application Testing

20.1 Importance of testing

20.2 Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing, and Release testing

Module 21 - Roles and Responsibilities of Hadoop Testing Professional

21.1 Understanding the Requirement

21.2 Preparation of the Testing Estimation

21.3 Test Cases, Test Data, Test Bed Creation, Test Execution, Defect Reporting, Defect Retesting, Daily Status Report delivery, and Test Completion; ETL testing at every stage (HDFS, Hive, and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes, but is not limited to, data verification, reconciliation, and user authorization and authentication testing (groups, users, privileges, etc.); reporting defects to the development team or manager and driving them to closure

21.4 Consolidating all the defects and creating defect reports

21.5 Validating new features and issues in core Hadoop

Module 22 - Framework Called MRUnit for Testing of MapReduce Programs

22.1 Reporting defects to the development team or manager and driving them to closure

22.2 Consolidating all the defects and creating defect reports

22.3 Working with MRUnit, the testing framework for MapReduce programs

Module 23 - Unit Testing

23.1 Automation testing using Oozie

23.2 Data validation using the QuerySurge tool

Module 24 - Test Execution

24.1 Test plan for HDFS upgrade

24.2 Test automation and results

Module 25 - Test Plan Strategy and Writing Test Cases for Testing Hadoop Application

25.1 Test, install and configure

What Hadoop Projects Will You Be Working on?

Project 01: Working with MapReduce, Hive and Sqoop

Industry: General

Problem Statement: How to successfully import data using Sqoop into HDFS for data analysis

Topics: As part of this project, you will work with various Hadoop components such as MapReduce, Apache Hive, and Apache Sqoop. You will use Sqoop to import data from a relational database management system such as MySQL into HDFS, and deploy Hive for summarizing, querying, and analyzing that data. You will write HiveQL queries that run as MapReduce jobs on the transferred data. You will gain considerable proficiency in Hive and Sqoop after completing this project.
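
The summarization step of this project is plain HiveQL; the sketch below wraps it in Spark's Hive support (covered in Module 11) only to keep the example in the course's Scala stack, and the same query runs unchanged in the Hive shell. The customers table and its country column are hypothetical placeholders for whatever table Sqoop imported.

```scala
// Minimal sketch: summarizing a Sqoop-imported Hive table with a HiveQL query.
import org.apache.spark.sql.SparkSession

object SqoopImportAnalysis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SqoopImportAnalysis")
      .enableHiveSupport()                 // lets Spark read tables from the Hive metastore
      .getOrCreate()

    // Hypothetical table imported by Sqoop; summarize customers per country
    val summary = spark.sql(
      """SELECT country, COUNT(*) AS customer_count
        |FROM customers
        |GROUP BY country
        |ORDER BY customer_count DESC""".stripMargin)

    summary.show()
    spark.stop()
  }
}
```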

Highlights:

1.1 Sqoop data transfer from RDBMS to Hadoop

1.2 Coding in Hive Query Language

1.3 Data querying and analysis

Project 02: Work on MovieLens data for finding the top movies

Industry: Media and Entertainment

Problem Statement: How to create the top-ten-movies list using the MovieLens data

Topics: In this project, you will work exclusively with the publicly available MovieLens rating datasets. The project involves writing a MapReduce program to analyze the MovieLens data and create the list of top ten movies. You will also use Apache Pig and Apache Hive to work with and analyze the distributed datasets.
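
The top-ten logic is a straightforward count-and-sort; here it is sketched with Spark RDDs in Scala, mirroring the map and reduce stages the MapReduce program implements. The HDFS path and the CSV layout (userId,movieId,rating,timestamp) are assumptions about how the MovieLens ratings file is laid out.

```scala
// Minimal sketch: count ratings per movie and take the top ten.
import org.apache.spark.sql.SparkSession

object TopMovies {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TopMovies").getOrCreate()
    val sc = spark.sparkContext

    val top10 = sc.textFile("hdfs:///data/movielens/ratings.csv")
      .filter(!_.startsWith("userId"))          // skip the header line, if present
      .map(_.split(","))
      .filter(_.length >= 3)
      .map(fields => (fields(1), 1))            // (movieId, 1)  -- the "map" stage
      .reduceByKey(_ + _)                       // ratings count per movie -- the "reduce" stage
      .sortBy(_._2, ascending = false)
      .take(10)

    top10.foreach { case (movieId, count) => println(s"$movieId\t$count") }
    spark.stop()
  }
}
```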

Highlights:

2.1 MapReduce program for working on the data file

2.2 Apache Pig for analyzing data

2.3 Apache Hive data warehousing and querying

Project 03: Hadoop YARN Project; End-to-end PoC

Industry: Banking

Problem Statement: How to bring the daily data (incremental data) into the Hadoop Distributed File System

Topics: In this project, transaction data is recorded daily in an RDBMS and transferred every day into HDFS for further Big Data analytics. You will work on a live Hadoop YARN cluster. YARN is the part of the Hadoop ecosystem that decouples Hadoop from MapReduce and supports a wider array of processing applications. You will work with the YARN central ResourceManager.

Highlights:

3.1 Using Sqoop commands to bring the data into HDFS

3.2 End-to-end flow of transaction data

3.3 Working with the data from HDFS

Project 04: Table Partitioning in Hive

Industry: Banking

Problem Statement: How to improve the query speed using Hive data partitioning

Topics: This project involves working with Hive table data partitioning. The right partitioning helps read data, place it on HDFS, and run MapReduce jobs at a much faster rate. Hive lets you partition data in multiple ways. This will give you hands-on experience in partitioning Hive tables manually, loading multiple partitions with a single SQL statement through dynamic partitioning, and bucketing data to break it into manageable chunks.
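
A minimal sketch of the partitioning DDL follows. It is expressed through Spark's Hive support so the example stays in Scala, but the HiveQL statements themselves are what you would run in the Hive shell; the table, column names, and the txns_staging source table are hypothetical. Bucketing is declared the same way with a CLUSTERED BY ... INTO n BUCKETS clause in the table definition.

```scala
// Minimal sketch: a date-partitioned Hive table and a dynamic-partition insert.
import org.apache.spark.sql.SparkSession

object HivePartitioningDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HivePartitioningDemo")
      .enableHiveSupport()
      .getOrCreate()

    // Table partitioned by transaction date (static partitions can also be
    // added one at a time with ALTER TABLE ... ADD PARTITION)
    spark.sql(
      """CREATE TABLE IF NOT EXISTS txns_part (
        |  txn_id BIGINT, account STRING, amount DOUBLE)
        |PARTITIONED BY (txn_date STRING)
        |STORED AS ORC""".stripMargin)

    // Dynamic partitioning: one INSERT routes every row to its txn_date partition
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql(
      """INSERT INTO TABLE txns_part PARTITION (txn_date)
        |SELECT txn_id, account, amount, txn_date FROM txns_staging""".stripMargin)

    // Filtering on the partition column reads only the matching partition
    spark.sql("SELECT COUNT(*) FROM txns_part WHERE txn_date = '2024-01-01'").show()
    spark.stop()
  }
}
```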

Highlights:

4.1 Manual Partitioning

4.2 Dynamic Partitioning

4.3 Bucketing

Project 05: Connecting Pentaho with Hadoop Ecosystem

Industry: Social Network

Problem Statement: How to deploy ETL for data analysis activities

Topics: This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie, and ZooKeeper. You will connect the Hadoop cluster with Pentaho Data Integration, analytics, Pentaho Server, and Report Designer. This project will give you complete working knowledge of the Pentaho ETL tool.

Highlights:

5.1 Working knowledge of ETL and Business Intelligence

5.2 Configuring Pentaho to work with Hadoop distribution

5.3 Loading, transforming and extracting data into Hadoop cluster

Project 06: Multi-node Cluster Setup

Industry: General

Problem Statement: How to set up a real-time Hadoop cluster on Amazon EC2

Topics: This project gives you the opportunity to work on a real-world Hadoop multi-node cluster setup in a distributed environment. You will get a complete demonstration of working with the master and slave nodes of a Hadoop cluster, installing Java as a prerequisite for running Hadoop, installing Hadoop itself, and mapping the nodes in the Hadoop cluster.

Highlights:

6.1 Hadoop installation and configuration

6.2 Running a multi-node Hadoop setup using a 4-node cluster on Amazon EC2

6.3 Deploying a MapReduce job on the Hadoop cluster

Project 07: Hadoop Testing Using MRUnit

Industry: General

Problem Statement: How to test MapReduce applications

Topics: In this project, you will gain proficiency in Hadoop MapReduce code testing using MRUnit. You will learn about real-world scenarios of deploying MRUnit, Mockito and PowerMock. This will give you hands-on experience in various testing tools for Hadoop MapReduce. After completion of this project you will be well-versed in test-driven development and will be able to write light-weight test units that work specifically on the Hadoop architecture.

Highlights:

7.1 Writing JUnit tests using MRUnit for MapReduce applications

7.2 Doing mock static methods using PowerMock and Mockito

7.3 MapReduce Driver for testing the map and reduce pair

Project 08: Hadoop Web Log Analytics

Industry: Internet Services

Problem Statement: How to derive insights from web log data

Topics: This project involves making sense of web log data in order to derive valuable insights from it. You will load the server data onto a Hadoop cluster using various techniques. The web log data can include URLs visited, cookie data, user demographics, location, date and time of web service access, etc. In this project, you will transport the data using Apache Flume or Kafka and handle workflow and data cleansing using MapReduce, Pig, or Spark. The insights thus derived can be used for analyzing customer behavior and predicting buying patterns.
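
Once the logs have landed in HDFS via Flume or Kafka, the cleansing and analytics step can be sketched in a few lines of Scala with Spark. The log path and the simple space-separated layout assumed here (ip, timestamp, url, status) are placeholders; real web logs usually need a proper parser.

```scala
// Minimal sketch: keep successful requests and count hits per URL.
import org.apache.spark.sql.SparkSession

object WebLogAnalytics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WebLogAnalytics").getOrCreate()
    val sc = spark.sparkContext

    val topUrls = sc.textFile("hdfs:///data/weblogs/")
      .map(_.split("\\s+"))
      .filter(f => f.length >= 4 && f(3) == "200")   // keep successful requests only
      .map(f => (f(2), 1))                           // (url, 1)
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .take(20)

    topUrls.foreach { case (url, hits) => println(s"$url\t$hits") }
    spark.stop()
  }
}
```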

Highlights:

8.1 Aggregation of log data

8.2 Apache Flume for data transportation

8.3 Processing of data and generating analytics

Project 09: Hadoop Maintenance

Industry: General

Problem Statement: How to administer a Hadoop cluster

Topics: This project involves maintaining and managing a Hadoop cluster. You will work on a number of important tasks, including recovering data, recovering from failures, adding and removing machines from the Hadoop cluster, and onboarding users on Hadoop.

Highlights:

9.1 Working with name node directory structure

9.2 Audit logging, data node block scanner and balancer

9.3 Failover, fencing, DISTCP and Hadoop file formats

Project 10: Twitter Sentiment Analysis

Industry: Social Media

Problem Statement: Find out people's reaction to India's demonetization move by analyzing their tweets

Topics: This project involves analyzing people's tweets about the demonetization decision taken by the Indian government. You then look for key phrases and words and score them using a sentiment dictionary, which attributes a value to each word based on the sentiment it conveys.
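
The course exercise does this scoring in Pig; the Scala/Spark sketch below shows the same word-split-and-score idea so the flow is easy to follow. The AFINN file path, the tweets file path, and their tab-separated layouts are assumptions; the AFINN dictionary is broadcast to the workers, which is the shared-variable pattern from Module 12.

```scala
// Minimal sketch: score each tweet by summing AFINN word scores (-5 to +5).
import org.apache.spark.sql.SparkSession

object TweetSentiment {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TweetSentiment").getOrCreate()
    val sc = spark.sparkContext

    // AFINN dictionary: "word<TAB>score" per line
    val afinn = sc.textFile("hdfs:///data/AFINN-111.txt")
      .map(_.split("\t"))
      .filter(_.length == 2)
      .map(f => (f(0), f(1).toInt))
      .collectAsMap()
    val afinnBc = sc.broadcast(afinn)                 // broadcast variable

    // Tweets: "tweetId<TAB>text" per line
    val scored = sc.textFile("hdfs:///data/tweets.tsv")
      .map(_.split("\t", 2))
      .filter(_.length == 2)
      .map { f =>
        val score = f(1).toLowerCase.split("\\W+")
          .map(word => afinnBc.value.getOrElse(word, 0))
          .sum
        (f(0), score)                                 // (tweetId, sentiment score)
      }

    scored.take(10).foreach(println)
    spark.stop()
  }
}
```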

Highlights:

10.1 Download the tweets and load into Pig storage

10.2 Divide tweets into words to calculate sentiment

10.3 Rating the words from +5 to −5 using the AFINN dictionary

10.4 Filtering the tweets and analyzing sentiment

Project 11: Analyzing IPL T20 Cricket

Industry: Sports and Entertainment

Problem Statement: Analyze the entire cricket match and get answers to any question regarding the details of the match

Topics: This project involves working with the IPL dataset that has information regarding batting, bowling, runs scored, wickets taken and more. This dataset is taken as input, and then it is processed so that the entire match can be analyzed based on the user queries or needs.
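
The highlights mention Pig or Hive for the analysis; the sketch below answers a typical user query with Spark SQL instead, purely to keep the example in the course's Scala stack. The HDFS path and the batsman / batsman_runs column names are assumptions about the ball-by-ball deliveries file.

```scala
// Minimal sketch: load the IPL deliveries data and list the top run scorers.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum}

object IplAnalysis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("IplAnalysis").getOrCreate()

    val deliveries = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/ipl/deliveries.csv")

    deliveries.groupBy("batsman")
      .agg(sum("batsman_runs").alias("total_runs"))
      .orderBy(col("total_runs").desc)
      .show(10)

    spark.stop()
  }
}
```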

Highlights:

11.1 Load the data into HDFS

11.2 Analyze the data using Apache Pig or Hive

11.3 Giving the right output based on user queries

Apache Spark Projects

Project 01: Movie Recommendation

Industry: Entertainment

Problem Statement: How to recommend the most appropriate movie to a user based on their taste

Topics: This is a hands-on Apache Spark project deployed for the real-world application of movie recommendations. This project helps you gain essential knowledge of Spark MLlib, Spark's Machine Learning library; you will learn how to build collaborative filtering, regression, clustering, and dimensionality reduction using Spark MLlib. Upon finishing the project, you will have first-hand experience in Apache Spark streaming data analysis, sampling, testing, and statistics, among other vital skills.
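
Collaborative filtering in Spark MLlib is typically built with Alternating Least Squares (ALS); the sketch below shows the core of such a recommender in Scala. The ratings path, the CSV layout (userId,movieId,rating), and the chosen hyperparameters are assumptions for illustration only.

```scala
// Minimal sketch: ALS-based collaborative filtering for movie recommendations.
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object MovieRecommender {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MovieRecommender").getOrCreate()

    val ratings = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/movielens/ratings.csv")
      .select(col("userId"), col("movieId"), col("rating"))

    val als = new ALS()
      .setUserCol("userId")
      .setItemCol("movieId")
      .setRatingCol("rating")
      .setRank(10)          // latent factors
      .setMaxIter(10)
      .setRegParam(0.1)

    val model = als.fit(ratings)
    model.recommendForAllUsers(5).show(truncate = false)   // top-5 movies per user
    spark.stop()
  }
}
```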

Highlights:

1.1 Apache Spark MLlib component

1.2 Statistical analysis

1.3 Regression and clustering

Project 02: Twitter API Integration for Tweet Analysis

Industry: Social Media

Problem Statement: Analyzing the user sentiment based on the tweet

Topics: This is a hands-on Twitter analysis project that uses the Twitter API for analyzing tweets. You will integrate the Twitter API and write the essential server-side code in Python or PHP. Finally, you will be able to read the results of various operations by filtering, parsing, and aggregating the data depending on the tweet analysis requirement.

Highlights:

2.1 Making requests to Twitter API

2.2 Building the server-side code

2.3 Filtering, parsing and aggregating data

Project 03: Data Exploration Using Spark SQL – Wikipedia Data Set

Industry: Internet

Problem Statement: Making sense of Wikipedia data using Spark SQL

Topics: In this project, you will use the Spark SQL tool to analyze Wikipedia data. You will gain hands-on experience in integrating Spark SQL with various applications such as batch analysis, Machine Learning, data visualization and processing, and ETL processes, along with real-time analysis of data.
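
As a small taste of that exploration, the sketch below assumes the Wikipedia dump has already been converted to a Parquet file with title and text fields (both the path and the fields are assumptions) and queries it through both the Data Frame API and SQL.

```scala
// Minimal sketch: explore a Wikipedia articles table with Spark SQL.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, length}

object WikipediaExploration {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WikipediaExploration").getOrCreate()

    val articles = spark.read.parquet("hdfs:///data/wikipedia/articles.parquet")
    articles.createOrReplaceTempView("articles")

    // Longest articles via the DataFrame API
    articles.select(col("title"), length(col("text")).alias("chars"))
      .orderBy(col("chars").desc)
      .show(10, truncate = false)

    // Same data through plain SQL
    spark.sql("SELECT COUNT(*) AS article_count FROM articles").show()
    spark.stop()
  }
}
```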

Highlights:

3.1 Machine Learning using Spark

3.2 Deploying data visualization

3.3 Spark SQL integration