Big Data for Data Engineers Specialization

Starts Dec 18


Build Your Data Engineering Skills. Learn how to tame the big data beast with the most popular tools, assisted by top-notch practitioners.

About This Specialization

This specialization is made for people working with data, whether small or big. If you are a Data Analyst, Data Scientist, Data Engineer or Data Architect (or want to become one), don’t miss the opportunity to expand your knowledge and skills in large-scale data engineering and data analysis.

In four concise courses you will learn the basics of Hadoop, MapReduce and Spark, methods of offline data processing for warehousing, real-time data processing, and large-scale machine learning. A Capstone project then lets you build and deploy your own Big Data service and make your portfolio even more competitive. Over the course of the specialization you will complete progressively harder programming assignments, mostly in Python, so make sure you have some experience with it.

This specialization will hone your skills in designing solutions for common Big Data tasks:
- creating batch and real-time data processing pipelines,
- doing machine learning at scale,
- deploying machine learning models into a production environment,
and much more! Join some of the best hands-on big data professionals, who know their job inside out, to learn the basics as well as some tricks of the trade.

Special thanks to: Prof. Mikhail Roytberg (APT dept., MIPT), Oleg Sukhoroslov (PhD, Senior Researcher, IITP RAS), Oleg Ivchenko (APT dept., MIPT), Pavel Akhtyamov (APT dept., MIPT), Vladimir Kuznetsov, Asya Roitberg, Eugene Baulin, Marina Sudarikova.

Created by: Yandex

5 courses

Follow the suggested order or choose your own.


Designed to help you practice and apply the skills you learn.


Highlight your new skills on your resume or LinkedIn.

Projects Overview

Intermediate Specialization.
Some related experience required.
  1. COURSE 1

    Big Data Essentials: HDFS, MapReduce and Spark RDD

    Upcoming session: Dec 18

    About the Course

    Have you ever heard of technologies such as HDFS, MapReduce and Spark? Always wanted to learn these new tools but missed concise starting material? Don’t miss this course either! In this 6-week course you will learn some basic technologies of the…
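The MapReduce model this course introduces can be sketched in plain Python (a conceptual toy, not the actual Hadoop API): a map phase emits (word, 1) pairs, a shuffle step groups pairs by key, and a reduce phase sums each group.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word, like a Hadoop streaming mapper
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle: sort by key so equal words become adjacent, then reduce each group
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

counts = dict(reduce_phase(map_phase(["to be or not to be"])))
print(counts)  # {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

In a real cluster the map and reduce phases run on many machines in parallel and the shuffle moves data over the network; the logic per phase stays this simple.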
  2. COURSE 2

    Big Data Analysis: Hive, Spark SQL, DataFrames and GraphFrames

    Upcoming session: Dec 18

    About the Course

    No doubt working with huge data volumes is hard, but to move a mountain you have to deal with a lot of small stones. But why strain yourself? Using MapReduce and Spark you tackle the issue only partially, leaving some space for high-level tools…
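The high-level tools this course covers (Hive, Spark SQL, DataFrames) let you express aggregations declaratively instead of writing raw MapReduce jobs. As a minimal illustration in plain Python (hypothetical click-log rows, not a real Hive table), here is what `SELECT user, SUM(clicks) FROM log GROUP BY user` computes:

```python
from collections import defaultdict

# Hypothetical click-log rows; in Hive or Spark SQL these would live in a table
rows = [
    {"user": "alice", "clicks": 3},
    {"user": "bob",   "clicks": 1},
    {"user": "alice", "clicks": 2},
]

# Equivalent of: SELECT user, SUM(clicks) FROM log GROUP BY user
totals = defaultdict(int)
for row in rows:
    totals[row["user"]] += row["clicks"]

print(dict(totals))  # {'alice': 5, 'bob': 1}
```

The point of SQL-on-big-data engines is that you write the one-line query and the engine generates the distributed group-by for you.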
  3. COURSE 3

    Big Data Applications: Machine Learning at Scale

    Upcoming session: Dec 18

    About the Course

    Machine learning is transforming the world around us. To become successful, you’d better know what kinds of problems can be solved with machine learning, and how they can be solved. Don’t know where to start? The answer is one button away. During this…
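As a taste of machine learning at scale, here is a minimal stochastic gradient descent loop in plain Python (a toy sketch; in a real large-scale setting the examples would stream from distributed storage rather than sit in a list). It fits a line to noise-free data generated from y = 2x + 1, processing one example at a time:

```python
import random

random.seed(0)
# Noise-free toy data from y = 2x + 1; a stand-in for a huge training set
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):            # passes over the data
    random.shuffle(data)
    for x, y in data:            # one example at a time: stochastic gradient descent
        err = (w * x + b) - y    # prediction error on this example
        w -= lr * err * x        # gradient step for the weight
        b -= lr * err            # gradient step for the bias

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Processing one example at a time is exactly why SGD scales: the model update never needs the whole dataset in memory.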
  4. COURSE 4

    Big Data Applications: Real-Time Streaming

    Starts November 2017

    About the Course

    From the practical point of view, data freshness is of utmost importance. Advertisements based on recent user activity are ten times more popular. Delays in traffic jam prediction cost extra time for customers and make them angry. Delays in tsunami prediction can cost people’s lives. The first half of the course is devoted to streaming frameworks that allow building continuous data applications to process massive amounts of data within seconds. You will learn techniques, technologies and algorithms suitable for building near real-time applications. In the second part of the course, you will learn how to integrate these technologies into your applications. We will explore common strategies for improving data freshness in web applications (by building your own sample application on Flask). As a bonus, you will learn about NoSQL databases, how to work with them, and when to use them instead of traditional RDBMS systems. Are you ready for a journey full of research surprises? Jump in!
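A core building block of the near real-time applications this course describes is the sliding window: counting only the events that happened in the last N seconds. A simplified sketch in plain Python (a hypothetical class, not the API of any real streaming framework):

```python
from collections import deque

class SlidingWindowCounter:
    """Counts events that occurred within the last `window` seconds."""

    def __init__(self, window):
        self.window = window
        self.events = deque()   # event timestamps, oldest first

    def record(self, ts):
        self.events.append(ts)

    def count(self, now):
        # Evict timestamps that have fallen out of the window
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        return len(self.events)

c = SlidingWindowCounter(window=60)
for ts in [0, 10, 50, 70]:
    c.record(ts)
print(c.count(now=70))  # 2: only the events at t=50 and t=70 are within the last 60 s
```

Real streaming engines apply the same evict-and-aggregate idea, but partitioned across machines and with fault tolerance on top.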
  5. COURSE 5

    Big Data Services: Capstone Project

    Starts December 2017

    About the Course

    Are you ready to close the loop on your Big Data skills? Do you want to put all the knowledge you gained in the previous courses into practice? In the Capstone project, you will integrate everything acquired earlier to build a real application leveraging the power of Big Data. You will be given a task to combine data from different sources of different types (a static distributed dataset, streaming data, SQL or NoSQL storage). This combined data will be used to build a predictive model for a financial market (as an example). First, you design a system from scratch and share it with your peers to get valuable feedback. Second, you can make it public, so get ready to receive feedback from your service users. Real-world experience without any 3D glasses or mock interviews.


  • Yandex


    Yandex is a technology company that builds intelligent products and services powered by machine learning. Our goal is to help consumers and businesses better navigate the online and offline world.

  • Pavel Klemenkov, Head of Machine Learning

  • Alexey A. Dral, Head of Big Data and Machine Learning

  • Pavel Mezentsev, Senior Data Scientist

  • Ilya Trofimov, Principal Data Scientist

  • Emeli Dral

  • Natalia Pritykovskaya

  • Ivan Puzyrevskiy, Technical Team Lead

  • Vladimir Lesnichenko

  • Evgeniy Ryabenko, Senior Data Scientist, Veon, Amsterdam