The Big Data Analytics course offers a deep dive into the technologies, tools, and techniques used to process and analyze large-scale data. Learners will explore the Hadoop and Spark ecosystems, gaining hands-on experience with essential components such as Hadoop Distributed File System (HDFS), MapReduce, Pig, and Hive. The course also covers both relational (SQL) and nonrelational (NoSQL) databases, helping learners understand the appropriate contexts for each type of data storage.

Big Data Analytics

Recommended experience
Beginner level
Experience in predictive analytics, working Python knowledge, and basic SQL familiarity. General understanding of data analysis.
What you'll learn
Gain a deep understanding of Hadoop and Spark ecosystems for managing big data. Become familiar with tools like Hive and Pig to query large datasets.
Details to know

Add to your LinkedIn profile
16 assignments

There are 11 modules in this course
Welcome to the Big Data Analytics course! By the end of this course, you will understand the various technologies in the Hadoop and Spark ecosystems. You will get hands-on experience working with core Hadoop components such as MapReduce and the Hadoop Distributed File System (HDFS), and you will learn to write Pig scripts and Hive queries to extract data stored across Hadoop clusters. You will also learn about relational (SQL) and nonrelational (NoSQL) databases and discuss scenarios in which one is preferred over the other for data storage. You will gain insight into the Spark ecosystem, which runs jobs across clusters very quickly and therefore powers several emerging applications, and you will work through a hands-on example of implementing and deploying a machine-learning application that handles streaming data on the cloud.

This is an advanced-level course, intended for learners with a background in predictive tools and techniques, experience writing basic Structured Query Language (SQL) queries, and an understanding of Python programming. The knowledge you gain from this course will help you build a career as a business analyst. You will develop skills to draw insights from data characterized by high velocity, volume, and variety. Data with these characteristics is called big data and is increasingly used by organizations for competitive advantage and decision-making.

In this module, you will learn about big data applications and the various components of the Hadoop ecosystem. The module also discusses the MapReduce paradigm, which facilitates distributed processing of data. You will gain insight into HDFS and use it for storing files. Hands-on examples use the Hortonworks Data Platform Sandbox, which can be installed on a Windows/Mac computer with at least 8 GB of available RAM.
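As a taste of the MapReduce paradigm discussed in this module, the sketch below runs the three phases of a word count in plain, single-process Python. It illustrates the map/shuffle/reduce structure only; a real Hadoop job (for instance via the mrjob library covered in the readings) distributes each phase across a cluster, and the input lines here are made up.

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs from each line of input.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group values by key, as the framework would do across nodes.
def shuffle_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: aggregate the grouped values for each key.
def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big clusters", "data flows through clusters"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["big"], counts["data"], counts["clusters"])  # 2 2 2
```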
What's included
13 videos • 4 readings • 2 assignments • 1 discussion prompt
13 videos•Total 96 minutes
- Course Introduction•2 minutes
- Introduction to Big Data•7 minutes
- Data Types and Applications•4 minutes
- The Need and Evolution of Hadoop•5 minutes
- The Hadoop Ecosystem•7 minutes
- Hortonworks Data Platform Sandbox Installation (Desktop/Laptop)•9 minutes
- Hortonworks Data Platform Sandbox Installation (Google Cloud)•15 minutes
- The HDFS File System•6 minutes
- Hands-On with HDFS on HDP Sandbox (Desktop/Laptop)•10 minutes
- Hands-On with HDFS on HDP Sandbox (Google Cloud)•14 minutes
- Distributed Computing Using YARN•5 minutes
- Introduction to MapReduce•6 minutes
- Hands-On with MapReduce Using Python•7 minutes
4 readings•Total 180 minutes
- Essential Reading: Introduction to Big Data•60 minutes
- Recommended Reading: Introduction to Hadoop Ecosystem•30 minutes
- Essential Reading: Hands-On with Hadoop•60 minutes
- Recommended Reading: mrjob Python Library•30 minutes
2 assignments•Total 39 minutes
- Introduction to Big Data and Hadoop Ecosystem•24 minutes
- Hands-On with Hadoop•15 minutes
1 discussion prompt•Total 20 minutes
- Applications of Big Data Analytics•20 minutes
This assessment is a graded quiz based on the module covered this week.
What's included
1 assignment
1 assignment•Total 60 minutes
- Graded Quiz: Introduction to Big Data and Hadoop•60 minutes
In this module, you will learn about the Hive scripting language and how to use it to mine data from Hadoop clusters. Hive provides an SQL dialect called Hive Query Language (abbreviated HiveQL, or just HQL) for querying data stored in a Hadoop cluster. Hive is best suited for data warehouse applications, where relatively static data is analyzed, fast response times are not required, and the data is not changing rapidly. Hive also makes it easier for developers to port SQL-based applications to Hadoop, compared with other Hadoop languages and tools.

Like all SQL dialects in widespread use, HiveQL does not fully conform to any particular revision of the ANSI SQL standard. It is perhaps closest to MySQL's dialect, but with significant differences. Hive supports several sizes of integer and floating-point types, a boolean type, and character strings of arbitrary length.

Lastly, taking a real-world dataset, you will load it into the Ambari environment for analysis using HDFS and HQL. You will go through the process of creating tables, loading data, and analyzing it using Hive Query Language.
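To preview the warehouse-style queries this module works toward, the sketch below uses Python's built-in sqlite3 as a stand-in for a Hive session. The table and column names are made up, and HiveQL differs in important ways (tables are backed by HDFS files and populated with LOAD DATA rather than INSERT), but the CREATE TABLE / GROUP BY pattern carries over.

```python
import sqlite3

# An in-memory SQLite database stands in for a Hive session here; the
# CREATE/SELECT pattern mirrors what you would write in HiveQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# In Hive this would be CREATE TABLE ... ROW FORMAT DELIMITED, followed by
# LOAD DATA INPATH to pull a file in from HDFS; the table is illustrative.
cur.execute("CREATE TABLE trips (city TEXT, fare REAL)")
cur.executemany("INSERT INTO trips VALUES (?, ?)",
                [("delhi", 12.0), ("delhi", 8.0), ("mumbai", 20.0)])

# A typical warehouse-style aggregation, valid in both SQLite and HiveQL.
cur.execute("SELECT city, COUNT(*), AVG(fare) FROM trips "
            "GROUP BY city ORDER BY city")
print(cur.fetchall())  # [('delhi', 2, 10.0), ('mumbai', 1, 20.0)]
```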
What's included
9 videos • 2 readings • 2 assignments • 1 discussion prompt
9 videos•Total 67 minutes
- Recap of Basic Concepts•6 minutes
- Introduction to Hive•6 minutes
- Hive Data Types•6 minutes
- HQL Commands and Uses•7 minutes
- HiveQL Data Definition and Manipulation•6 minutes
- Getting Started with Hive•11 minutes
- Using the Hive View on Ambari•8 minutes
- Practice Example on Hive•8 minutes
- Challenge: Hands-On•9 minutes
2 readings•Total 105 minutes
- Essential Reading: Introduction to Hive•15 minutes
- Essential Reading: Hands-On with Hive•90 minutes
2 assignments•Total 30 minutes
- Introduction to Hive•18 minutes
- Hands-On with Hive•12 minutes
1 discussion prompt•Total 15 minutes
- Introduction to Hive•15 minutes
This assessment is a graded quiz based on the modules covered this week.
What's included
1 assignment
1 assignment•Total 60 minutes
- Graded Quiz: Introduction to Data Mining with Hive•60 minutes
In this module, you will learn about the Pig Latin scripting language and how you can leverage it to query big data on Hadoop clusters. You will also learn about the different data types and commands available in the Pig Latin language and how they can be used to define and manipulate data in the Hadoop ecosystem. Furthermore, you will work on a practical example, running Pig Latin scripts for data analysis on a publicly available dataset.
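A Pig Latin script expresses a dataflow of relational operators (LOAD, FILTER, GROUP, FOREACH ... GENERATE). The plain-Python sketch below mirrors that dataflow on a made-up (user, action) dataset; it illustrates the shape of the pipeline only, not Pig's syntax or its distributed execution.

```python
from itertools import groupby

# Records as a Pig LOAD would see them after applying a schema;
# the (user, action) fields are illustrative.
records = [("alice", "click"), ("bob", "view"), ("alice", "click"),
           ("bob", "click"), ("alice", "view")]

# FILTER records BY action == 'click'
clicks = [r for r in records if r[1] == "click"]

# GROUP clicks BY user, then FOREACH ... GENERATE group, COUNT(...)
clicks.sort(key=lambda r: r[0])
counts = {user: len(list(group))
          for user, group in groupby(clicks, key=lambda r: r[0])}
print(counts)  # {'alice': 2, 'bob': 1}
```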
What's included
7 videos • 2 readings • 2 assignments
7 videos•Total 57 minutes
- Introduction to Pig Latin•8 minutes
- Pig Data Types•7 minutes
- Pig Latin Commands and Uses•7 minutes
- Pig Data Definition and Manipulation•9 minutes
- Running Pig View on Ambari•6 minutes
- Example on Pig View•10 minutes
- Practice Problem as a Challenge•11 minutes
2 readings•Total 105 minutes
- Essential Reading: Introduction to Pig Language•15 minutes
- Recommended Reading: Hands-On with Pig•90 minutes
2 assignments•Total 30 minutes
- Introduction to Pig Language•24 minutes
- Hands-On with Pig•6 minutes
In this module, you will be introduced to the need for NoSQL databases, and to HBase, a NoSQL database, and its role in the Hadoop ecosystem. You will learn about the CAP theorem and how it shapes the trade-offs among the NoSQL database options available on Hadoop. You will examine consistency, availability, and partition tolerance in detail and see how they affect the choice of technology for accessing and manipulating data on Hadoop. Lastly, you will get insights into other emerging cloud-based NoSQL solutions.
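The CAP trade-off discussed in this module can be made concrete with a toy model: two replicas of a single key, with a network partition blocking replication. An availability-first (AP) system answers reads from a possibly stale replica, while a consistency-first (CP) system refuses to answer during the partition. Everything here is illustrative and does not model any real database's API.

```python
# Toy model of the CAP trade-off: two replicas of one key-value pair.
replicas = [{"x": 1}, {"x": 1}]
partitioned = True  # a network partition blocks replication to replica 1

def write(key, value):
    replicas[0][key] = value
    if not partitioned:          # replication only succeeds without a partition
        replicas[1][key] = value

def read_ap(key):
    # AP choice: always answer, even if the local replica may be stale.
    return replicas[1][key]

def read_cp(key):
    # CP choice: refuse to answer rather than risk an inconsistent read.
    if partitioned:
        raise RuntimeError("unavailable during partition")
    return replicas[1][key]

write("x", 2)
print(read_ap("x"))  # 1 -- the read succeeds, but returns a stale value
```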
What's included
8 videos • 2 readings • 2 assignments • 1 discussion prompt
8 videos•Total 59 minutes
- Introduction to Data Warehouses•8 minutes
- Need for NoSQL Databases•8 minutes
- CAP Theorem•8 minutes
- Making a Choice of a Database•8 minutes
- Introduction to HBase•7 minutes
- Architecture of HBase•8 minutes
- HBase Data Model•6 minutes
- Running and Setting Up HBase on Ambari and Hands-On with HBase•7 minutes
2 readings•Total 135 minutes
- Essential Reading: Introduction to NoSQL Databases•45 minutes
- Recommended Reading: Hands-On with HBase•90 minutes
2 assignments•Total 30 minutes
- Introduction to NoSQL Databases•15 minutes
- Hands-On with HBase•15 minutes
1 discussion prompt•Total 15 minutes
- Architecture of HBase•15 minutes
This assessment is a graded quiz based on the modules covered this week.
What's included
1 assignment
1 assignment•Total 60 minutes
- Graded Quiz: NoSQL Databases and the CAP Theorem•60 minutes
In this module, you will be introduced to the popular Apache Spark platform for big data processing. You will explore the key components of Apache Spark that provide significant benefits in distributed computing. You will also be introduced to Resilient Distributed Datasets (RDDs) and Spark DataFrames. Furthermore, you will be introduced to Spark SQL and Spark Streaming.
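To show the programming style this module builds toward without requiring a Spark installation, the sketch below mimics the RDD chaining idiom (map, filter, reduceByKey, collect) with a tiny in-memory class. It reproduces the shape of the API only; real RDDs are lazily evaluated, partitioned across a cluster, and fault-tolerant.

```python
# A minimal, single-process stand-in for Spark's RDD idiom.
class MiniRDD:
    def __init__(self, data):
        self.data = list(data)

    def map(self, f):
        return MiniRDD(f(x) for x in self.data)

    def filter(self, f):
        return MiniRDD(x for x in self.data if f(x))

    def reduceByKey(self, f):
        # Combine values sharing a key, as Spark does across partitions.
        acc = {}
        for k, v in self.data:
            acc[k] = f(acc[k], v) if k in acc else v
        return MiniRDD(acc.items())

    def collect(self):
        return self.data

rdd = MiniRDD(["spark", "hadoop", "spark", "hive"])
pairs = rdd.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b).collect()
print(sorted(pairs))  # [('hadoop', 1), ('hive', 1), ('spark', 2)]
```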
What's included
11 videos • 4 readings • 2 assignments • 1 discussion prompt
11 videos•Total 70 minutes
- The Need for Spark•5 minutes
- Spark Background and Applications•6 minutes
- The Resilient Distributed Dataset (RDD)•7 minutes
- Hands-On with the PySpark Library in Python•8 minutes
- Working with Spark DataFrames and Spark SQL•5 minutes
- Hands-On with Structured Queries on Spark•7 minutes
- Need for Processing Streaming Data•5 minutes
- Introduction to Spark Streaming•6 minutes
- Hands-On with DStream API•7 minutes
- Structured Streaming•6 minutes
- Hands-On with Structured Streaming•6 minutes
4 readings•Total 360 minutes
- Essential Reading: Introduction to Spark•180 minutes
- Recommended Reading: Quick Start on Spark•60 minutes
- Essential Reading: Introduction to Spark Streaming•90 minutes
- Recommended Reading: Spark Structured Streaming•30 minutes
2 assignments•Total 30 minutes
- Introduction to the Building Blocks of Spark•15 minutes
- Introduction to Spark Streaming•15 minutes
1 discussion prompt•Total 20 minutes
- Windowing in Structured Streaming•20 minutes
This assessment is a graded quiz based on the module covered this week.
What's included
1 assignment
1 assignment•Total 60 minutes
- Graded Quiz: Introduction to Spark•60 minutes
In this module, you will learn about MLlib, which is used for making predictions on large datasets that require distributed processing. You will work on regression and classification tasks for large datasets. You will then implement a hands-on exercise with streaming data from the Twitter API: a predictive streaming application that shows participants an end-to-end big data scenario.
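To give a sense of what MLlib's regression routines compute, the sketch below fits a one-variable least-squares line in plain Python on made-up data; MLlib performs the analogous computation in a distributed fashion over much larger datasets.

```python
# Plain-Python sketch of the one-variable least-squares fit that a
# distributed linear-regression routine generalizes; the data is made up.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept from the means.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2))  # 1.94
```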
What's included
8 videos • 3 readings • 2 assignments
8 videos•Total 52 minutes
- Introduction to MLlib•5 minutes
- Regression Algorithms in MLlib•6 minutes
- Solving Classification Problems with MLlib•6 minutes
- Hands-On with Sentiment Analysis•8 minutes
- Introduction to Google Cloud Dataproc•5 minutes
- Hands-On: Setting Up a Cluster on Google Dataproc•8 minutes
- Streaming Data from Twitter API•7 minutes
- Hands-On with a Streaming Analytics Application•7 minutes
3 readings•Total 150 minutes
- Essential Reading: Introduction to ML on Spark•90 minutes
- Recommended Reading: Dataproc Best Practices Guide•30 minutes
- Recommended Reading: Twitter API v2•30 minutes
2 assignments•Total 27 minutes
- Machine Learning on Spark•15 minutes
- Running Hadoop and Spark on Cloud•12 minutes
Course Wrap-Up Video
What's included
1 video
1 video•Total 1 minute
- Course Wrap-up•1 minute
Build toward a degree
This course is part of the following degree program(s) offered by O.P. Jindal Global University. If you are admitted and enroll, your completed coursework may count toward your degree learning and your progress can transfer with you.¹
O.P. Jindal Global University
MBA in Business Analytics
Degree · 12 - 24 months
¹Successful application and enrollment are required. Eligibility requirements apply. Each institution determines the number of credits recognized by completing this content that may count towards degree requirements, considering any existing credits you may have. Click on a specific course for more information.
Instructor

Offered by

O.P. Jindal Global University is recognised as an Institution of Eminence by the Ministry of Education, Government of India. It is also ranked the No. 1 Private University in India in the QS World University Rankings 2021. The university has 9000+ students across 12 schools that offer 52 degree programs. The university maintains a 1:9 faculty-student ratio. It is a research-intensive university, deeply committed to institutional values of interdisciplinary and innovative learning, pluralism and rigorous scholarship, globalism, and international engagement.