Contact: +91-9158698731
Hadoop Cluster

Hadoop Training in Pune online

Mode of training: WebEx session (online) or classroom (offline)
(Every student gets a 6-node Hadoop cluster for practice, so the training feels like real-time, hands-on experience.)

Hadoop Training Syllabus
Duration: 45 hours of training

Background Concepts
. Understand the problem Hadoop solves
. Understand the Hadoop version 1 approach
. Understand the Hadoop version 2 approach
. Understand the Hadoop project

Running Hadoop on a desktop or Laptop
. Install Cloudera
. Install from Apache Hadoop sources

Setting up Hadoop on a Local Cluster
. Specify and prepare servers
. Install and configure Hadoop Core
. Install and configure Pig and Hive
. Perform simple administration and monitoring

The Hadoop Distributed File System (HDFS)
. Understand HDFS basics
. Use HDFS tools and do administration
. Use HDFS in programs
. Learn additional features of HDFS

YARN and Hadoop MapReduce
. Understand YARN
. ResourceManager and ApplicationMaster
. Understand the MapReduce paradigm
. Develop and run a Java MapReduce application
– Java MapReduce program with real-time data
. Understand how MapReduce works
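The MapReduce paradigm covered in this module can be sketched in plain Python. The following is a local simulation of the map, shuffle, and reduce phases of a word count; it is purely illustrative and does not use the actual Hadoop API:

```python
from collections import defaultdict

def map_phase(line):
    """Mapper: emit a (word, 1) pair for every word in a line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle/sort: group all values by key, as Hadoop does
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reducer: sum the counts emitted for one word."""
    return (key, sum(values))

lines = ["hadoop stores big data", "hadoop processes big data"]
pairs = [kv for line in lines for kv in map_phase(line)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
# 'hadoop', 'big' and 'data' are each counted twice, the rest once
```

In real Hadoop, the mappers and reducers run in parallel on different nodes and the framework performs the shuffle; the per-record logic is the same.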

Hadoop MapReduce Examples
. Using the streaming interface
– Python programming
. Using the pipes interface
. Running the Hadoop grep examples
. Debug MapReduce
. Understand Hadoop version 2 MapReduce
. Use Hadoop version 2 features
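The streaming interface above lets any language act as mapper and reducer: Hadoop Streaming pipes input lines to the script's stdin and reads tab-separated key/value pairs from its stdout. A minimal Python word-count sketch follows; the two phases are chained locally here for illustration, whereas in a real job each function would read `sys.stdin` and the framework would sort between them:

```python
def mapper(lines):
    """Streaming mapper: emit one 'word<TAB>1' record per word."""
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

def reducer(lines):
    """Streaming reducer: input arrives sorted by key, so equal keys
    are adjacent; sum each run and emit 'word<TAB>count'."""
    current, count = None, 0
    for line in lines:
        word, value = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{count}"
            current, count = word, 0
        count += int(value)
    if current is not None:
        yield f"{current}\t{count}"

# Local simulation of mapper -> sort (shuffle) -> reducer:
sample = ["big data", "big cluster"]
output = list(reducer(sorted(mapper(sample))))
print(output)  # ['big\t2', 'cluster\t1', 'data\t1']
```

On a cluster, the mapper and reducer would live in separate scripts submitted with the Hadoop Streaming jar's `-mapper` and `-reducer` options.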

Zookeeper
. Architecture of ZooKeeper
. How a ZooKeeper cluster works in the Hadoop ecosystem

Pig and ETL
. Architecture of Pig
. Pig Latin basics
. Types of use cases where Pig fits
. Tight coupling between Pig and MapReduce
. Pig Latin scripting
. Pig running modes
. Pig UDFs
. Pig streaming
. Pig commands: Load, Store, Describe, Dump
. Processing structured and unstructured data in Pig
. Testing Pig scripts
. Pig Latin built-in function examples
. Complex uses of Pig
. Real-time Pig use cases
. Practice on data sets
. Exploring Pig tools in the Cloudera distribution

Hive
. Hive architecture and Hive cluster deployment
. Metastore in Hive
. Limitations of Hive, Comparison with Traditional Database
. Hive Data Types and Data Models
. Partitions and Buckets.
. Hive Tables (Managed Tables and External Tables)
. Importing Data, Querying Data, Managing Outputs
. Hive Script, Hive UDF
. HiveQL: Joining Tables, Dynamic Partitioning, Custom Map/Reduce Scripts, Hive Indexes and Views, Hive Query Optimizers
. Hive: Thrift Server, User Defined Functions (UDF)
. Practice demo on Data Set
. Exploring Hive tools in the Cloudera distribution and the Hue web console

Hbase
. HBase: Introduction to NoSQL Databases and HBase
. HBase vs. RDBMS, HBase Components
. HBase Architecture, Run Modes & Configuration, HBase Cluster Deployment
. HBase Data Model
. HBase Shell, HBase Client API
. Data Loading Techniques
. ZooKeeper Data Model and ZooKeeper Service
. Practical on Bulk Loading, Getting and Inserting Data, Filters in HBase
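The HBase data model in this module is conceptually a sparse, sorted, multi-dimensional map: row key → column family → column qualifier → timestamped versions of a value. A rough sketch in plain Python dictionaries, with hypothetical row and column names, purely illustrative and not the HBase client API:

```python
# HBase model, conceptually: row key -> column family -> qualifier
# -> {timestamp: value}, with cell values stored as raw bytes.
table = {
    "row1": {
        "info": {                               # column family "info"
            "name": {1700000000: b"Alice"},
            "city": {1700000000: b"Pune",       # newer version
                     1600000000: b"Mumbai"},    # older version
        }
    }
}

def get(table, row, family, qualifier):
    """Return the newest version of a cell, like a default HBase Get."""
    versions = table[row][family][qualifier]
    return versions[max(versions)]              # highest timestamp wins

print(get(table, "row1", "info", "city"))       # b'Pune'
```

Unlike an RDBMS row, a real HBase row stores only the cells that exist, keeps multiple timestamped versions per cell, and is physically sorted by row key.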

Hue
. Overview of Hue
. Practical use of Hue

Flume
. Architecture of Flume
. Setting up an agent
. Executing commands
. Flume channels

Oozie
. Introduction to Oozie
. Oozie Components
. Building a workflow with Oozie
. Scheduling with Oozie
. Oozie Coordinator, Oozie Commands, Oozie Web Console, Oozie for MapReduce, Pig, Hive, and Sqoop
. Combining the flow of MapReduce, Pig, and Hive in Oozie

Mahout
. Architecture of Mahout

Sqoop
. Sqoop concept
. Connecting to database servers
. Connecting to a mainframe
. Importing data into Hive and HBase
. Real time example on Apache Sqoop

Spark and Scala
. Overview of Scala
. Spark Ecosystem
. Spark Components
. History of Spark and Spark Versions/Releases
. Real-time example of Scala with Spark
. Advanced Spark programming with Scala
. Practical on running application on Spark Cluster
. Comparing performance of MapReduce and Spark

Setting Up Hadoop in the Cloud (AWS)
. Hadoop Cluster setup in Cloud

1. This is completely practical training.
2. The training is 90% hands-on practice.
3. We provide 9 real-time projects, as we have real-time data.
4. We provide a real-time data set for each topic for practice.
5. We provide extra classes for Java and SQL if needed.
6. Every student gets a 6-node cluster for practice, so the training feels like real-time experience.
7. Topics covered: NameNode, Standby NameNode, YARN, ResourceManager, MapReduce, DWH, BI, ETL, Pig, Hive, NoSQL, HBase, ZooKeeper, Sqoop, Flume, Oozie, Hue, Spark, Scala, Mahout, basic Java, and SQL.

Registration Process: We invite you to attend a demo session by a Hadoop expert and enroll in the Hadoop program once you are satisfied.
Placement: 100% placement assistance. We have tie-ups with consultancies that place Hadoop professionals in multinational companies. Call +91-9158698731 for a demo.

Q. Who can learn Hadoop?

Hadoop is suitable for all IT professionals who aspire to become Data Scientists, Data Analysts, Developers, or Hadoop Administrators and to grow into industry experts. This course can be pursued by professionals from Java as well as non-Java backgrounds (including Mainframe, DWH, etc.).
A research report estimated that the IT market for big data in India was hovering around $1.15 billion as 2015 came to an end, contributing one fifth of India's KPO market worth $5.6 billion. The Hindu also predicts that by the end of 2018, India alone will face a shortage of close to two lakh Data Scientists. This presents a tremendous career and growth opportunity.
If you belong to any of the following groups, knowledge of Big Data and Hadoop is crucial for you if you want to progress in your career:
1. Analytics professionals
2. BI /ETL/DW professionals
3. Project managers
4. Testing professionals
5. Mainframe professionals
6. System administrators
7. Developers
8. Any IT or non-IT professional who wants to build a career in Hadoop

Q. What are the pre-requisites for the Hadoop Course or if I don’t know java can I learn Hadoop?

There are no strict prerequisites for learning Hadoop. Very basic knowledge of Core Java and SQL will help, but it is not required. The essential Java skills that Hadoop does require are covered in the course itself.

Q. Why Hadoop?

. A Hadoop cluster solves the Big Data problem
. It is open-source technology
. It serves as an open-source data warehouse (DWH)
. Fast processing
. Distributed storage
. Distributed and parallel processing
Project Details:
1) Project on Social Media
2) Project on Retail Industry
3) Project on Tourism Industry
4) Project on Aviation Industry
5) Project on Banking and Finance Industry
6) Project on Media industry
7) Project on YouTube data
8) Project on Twitter
9) Project on Insurance sector

We assure you of the quality of our training: attend our Hadoop training and you will never forget us. We give personal attention to every student, to upskill and guide each one in the right direction. You can also call us for career guidance rather than just for training; we will definitely guide you.

Hadoop training in Pune

For a demo or career guidance, call +91-9158698731

Copyright © 2016 SkyBird Technology All rights reserved