About this Course
30,699 recent views

100% online

Start instantly and learn at your own schedule.

Flexible deadlines

Reset deadlines in accordance with your schedule.

Intermediate Level


Subtitles: English

Skills you will gain

Dataflow · Parallel Computing · Java Concurrency · Data Parallelism


Syllabus - What you will learn from this course

1 hour to complete

Welcome to the Course!

Welcome to Parallel Programming in Java! This course is one part of a three-course series, and covers its body of knowledge through video lectures, demonstrations, and coding projects.

1 video (Total 1 min), 5 readings, 1 quiz
1 video
5 readings
General Course Info (5m)
Course Icon Legend (5m)
Discussion Forum Guidelines (5m)
Pre-Course Survey (10m)
Mini Project 0: Setup (10m)
4 hours to complete

Task Parallelism

In this module, we will learn the fundamentals of task parallelism. Tasks are the most basic unit of parallel programming. An increasing number of programming languages (including Java and C++) are moving from older thread-based approaches to more modern task-based approaches for parallel programming. We will learn about task creation, task termination, and the “computation graph” theoretical model for understanding various properties of task-parallel programs. These properties include work, span, ideal parallelism, parallel speedup, and Amdahl’s Law. We will also learn popular Java APIs for task parallelism, most notably the Fork/Join framework.
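To make the Fork/Join style concrete, here is a minimal sketch in the spirit of the module's ReciprocalArraySum demos. It is an illustrative example, not the mini-project's actual code; the class name `ReciprocalSum` and the `THRESHOLD` cutoff are choices made for this sketch.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sums the reciprocals of an array's elements by recursively splitting the
// index range [lo, hi) into two subtasks until the range is small enough
// to compute sequentially.
public class ReciprocalSum extends RecursiveAction {
    private static final int THRESHOLD = 1_000; // sequential cutoff (tunable)
    private final double[] input;
    private final int lo, hi;
    double sum; // result for this task's range, valid after completion

    ReciprocalSum(double[] input, int lo, int hi) {
        this.input = input;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo <= THRESHOLD) {
            for (int i = lo; i < hi; i++) sum += 1.0 / input[i];
        } else {
            int mid = (lo + hi) / 2;
            ReciprocalSum left = new ReciprocalSum(input, lo, mid);
            ReciprocalSum right = new ReciprocalSum(input, mid, hi);
            left.fork();      // run the left half asynchronously
            right.compute();  // compute the right half in this thread
            left.join();      // wait for the forked subtask to finish
            sum = left.sum + right.sum;
        }
    }

    public static void main(String[] args) {
        double[] a = new double[10_000];
        java.util.Arrays.fill(a, 2.0);
        ReciprocalSum task = new ReciprocalSum(a, 0, a.length);
        ForkJoinPool.commonPool().invoke(task);
        System.out.println(task.sum); // 10000 * (1/2.0) = 5000.0
    }
}
```

The fork/compute/join pattern above (fork one half, compute the other directly) avoids leaving the current worker thread idle while its subtasks run.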

7 videos (Total 42 min), 6 readings, 2 quizzes
7 videos
1.2 Tasks in Java's Fork/Join Framework (5m)
1.3 Computation Graphs, Work, Span (7m)
1.4 Multiprocessor Scheduling, Parallel Speedup (8m)
1.5 Amdahl's Law (5m)
ReciprocalArraySum using Async-Finish (Demo) (4m)
ReciprocalArraySum using RecursiveActions in Java's Fork/Join Framework (Demo) (5m)
6 readings
1.1 Lecture Summary (5m)
1.2 Lecture Summary (5m)
1.3 Lecture Summary (5m)
1.4 Lecture Summary (5m)
1.5 Lecture Summary (5m)
Mini Project 1: Reciprocal-Array-Sum using the Java Fork/Join Framework (10m)
1 practice exercise
Module 1 Quiz (30m)
4 hours to complete

Functional Parallelism

Welcome to Module 2! In this module, we will learn about approaches to parallelism that have been inspired by functional programming. Advocates of parallel functional programming have argued for decades that functional parallelism can eliminate many hard-to-detect bugs that can occur with imperative parallelism. We will learn about futures, memoization, and streams, as well as data races, a notorious class of bugs that can be avoided with functional parallelism. We will also learn Java APIs for functional parallelism, including the Fork/Join framework and the Stream APIs.
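As a small taste of the functional style this module teaches, here is a generic parallel-stream pipeline (an illustrative sketch, not Mini Project 2's code). Because the pipeline has no side effects, switching between `stream()` and `parallelStream()` changes performance, not the answer:

```java
import java.util.List;

public class EvenAverage {
    public static void main(String[] args) {
        double avg = List.of(1, 2, 3, 4, 5, 6).parallelStream()
                .filter(n -> n % 2 == 0)      // keep the even values: 2, 4, 6
                .mapToInt(Integer::intValue)  // unbox to an IntStream
                .average()                    // reduce to a single value
                .orElse(0.0);                 // default for an empty stream
        System.out.println(avg);              // prints 4.0
    }
}
```

This determinism under parallel execution is exactly the property that imperative code with data races lacks, which is why the module pairs streams with the lecture on data races and determinism.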

7 videos (Total 40 min), 6 readings, 2 quizzes
7 videos
2.2 Futures in Java's Fork/Join Framework (5m)
2.3 Memoization (6m)
2.4 Java Streams (5m)
2.5 Data Races and Determinism (9m)
ReciprocalArraySum using RecursiveTasks in Java's Fork/Join Framework (Demo) (3m)
Parallel List Processing Using Java Streams (Demo) (4m)
6 readings
2.1 Lecture Summary (10m)
2.2 Lecture Summary (10m)
2.3 Lecture Summary (10m)
2.4 Lecture Summary (10m)
2.5 Lecture Summary (10m)
Mini Project 2: Analyzing Student Statistics Using Java Parallel Streams (10m)
1 practice exercise
Module 2 Quiz (30m)
23 minutes to complete

Talking to Two Sigma: Using it in the Field

Join Professor Vivek Sarkar as he talks with Two Sigma Managing Director, Jim Ward, and Software Engineers, Margaret Kelley and Jake Kornblau, at their downtown Houston, Texas office about the importance of parallel programming.

2 videos (Total 13 min), 1 reading
2 videos
Industry Professionals on Parallelism - Jake Kornblau and Margaret Kelley, Software Engineers (6m)
1 reading
About these Talks (10m)
4 hours to complete

Loop Parallelism

Welcome to Module 3, and congratulations on reaching the midpoint of this course! It is well known that many applications spend a majority of their execution time in loops, so there is a strong motivation to learn how loops can be sped up through the use of parallelism, which is the focus of this module. We will start by learning how parallel counted-for loops can be conveniently expressed using forall and stream APIs in Java, and how these APIs can be used to parallelize a simple matrix multiplication program. We will also learn about the barrier construct for parallel loops, and illustrate its use with a simple iterative averaging program example. Finally, we will learn the importance of grouping/chunking parallel iterations to reduce overhead.
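The parallel matrix multiplication idea can be sketched with Java's stream API (the course also uses a forall construct from the pedagogic PCDP library, which is not shown here; the class and method names below are choices made for this sketch). Each iteration of the outer i-loop writes a disjoint row of the result, so the rows can be computed in parallel with no synchronization:

```java
import java.util.stream.IntStream;

public class MatMul {
    // Computes c = a * b, parallelizing over the rows of the result.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        IntStream.range(0, n).parallel().forEach(i -> {
            // This body touches only row i of c, so iterations are independent.
            for (int j = 0; j < m; j++) {
                double sum = 0.0;
                for (int p = 0; p < k; p++) sum += a[i][p] * b[p][j];
                c[i][j] = sum;
            }
        });
        return c;
    }
}
```

Parallelizing only the outermost loop also illustrates the module's chunking lesson: each parallel iteration carries a full row's worth of work, keeping per-task overhead low.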

7 videos (Total 41 min), 6 readings, 2 quizzes
7 videos
3.2 Parallel Matrix Multiplication (4m)
3.3 Barriers in Parallel Loops (5m)
3.4 Parallel One-Dimensional Iterative Averaging (8m)
3.5 Iteration Grouping/Chunking in Parallel Loops (6m)
Parallel Matrix Multiplication (Demo) (4m)
Parallel One-Dimensional Iterative Averaging (Demo) (5m)
6 readings
3.1 Lecture Summary (10m)
3.2 Lecture Summary (10m)
3.3 Lecture Summary (10m)
3.4 Lecture Summary (10m)
3.5 Lecture Summary (10m)
Mini Project 3: Parallelizing Matrix-Matrix Multiply Using Loop Parallelism (10m)
1 practice exercise
Module 3 Quiz (30m)
5 hours to complete

Data flow Synchronization and Pipelining

Welcome to the last module of the course! In this module, we will wrap up our introduction to parallel programming by learning how data flow principles can be used to increase the amount of parallelism in a program. We will learn how Java’s Phaser API can be used to implement “fuzzy” barriers, and also “point-to-point” synchronizations as an optimization of regular barriers by revisiting the iterative averaging example. Finally, we will also learn how pipeline parallelism and data flow models can be expressed using Java APIs.
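The "fuzzy" barrier idea can be sketched with `java.util.concurrent.Phaser` (an illustrative example, not the course's iterative-averaging code; the class name and the shape of the work are choices made for this sketch). Splitting `arrive()` from `awaitAdvance()` lets each thread overlap purely local work with the barrier instead of blocking immediately:

```java
import java.util.concurrent.Phaser;

public class FuzzyBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        final int N = 4;
        final int[] shared = new int[N];
        Phaser phaser = new Phaser(N); // N registered parties
        Thread[] threads = new Thread[N];
        for (int t = 0; t < N; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                shared[id] = id * id;        // work that other threads will read
                int phase = phaser.arrive(); // signal completion, do not block
                long local = 0;              // local work overlapped with the barrier
                for (int i = 0; i < 1_000; i++) local += i;
                phaser.awaitAdvance(phase);  // now wait for all N arrivals
                int sum = 0;                 // safe: every shared[i] was written
                for (int v : shared) sum += v;
                System.out.println("thread " + id + " sees sum " + sum);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
    }
}
```

Every thread prints sum 14 (0 + 1 + 4 + 9): the `arrive()`/`awaitAdvance()` pair guarantees that writes before arrival are visible after the phase advances, while the loop computing `local` runs without waiting for anyone.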

7 videos (Total 38 min), 7 readings, 2 quizzes
7 videos
4.2 Point-to-Point Synchronization with Phasers (4m)
4.3 One-Dimensional Iterative Averaging with Phasers (4m)
4.4 Pipeline Parallelism (5m)
4.5 Data Flow Parallelism (5m)
Phaser Examples (6m)
Pipeline & Data Flow Parallelism (7m)
7 readings
4.1 Lecture Summary (10m)
4.2 Lecture Summary (10m)
4.3 Lecture Summary (10m)
4.4 Lecture Summary (10m)
4.5 Lecture Summary (10m)
Mini Project 4: Using Phasers to Optimize Data-Parallel Applications (10m)
Exit Survey (10m)
1 practice exercise
Module 4 Quiz (30m)
20 minutes to complete

Continue Your Journey with the Specialization "Parallel, Concurrent, and Distributed Programming in Java"

The next two videos will showcase the importance of learning about Concurrent Programming and Distributed Programming in Java. Professor Vivek Sarkar will speak with industry professionals at Two Sigma about how the topics of our other two courses are utilized in the field.

2 videos (Total 10 min), 1 reading
2 videos
Industry Professional on Distribution - Dr. Eric Allen, Senior Vice President, Two Sigma (6m)
1 reading
Our Other Course Offerings (10m)
126 Reviews



Top reviews from Parallel Programming in Java

By LG, Dec 13th 2017

This is a great course in parallel programming. The videos were very clear, summaries reinforced the video material and the programming projects and quizzes were challenging but not overwhelming.

By SV, Aug 28th 2017

Great course. Introduces Parallel Programming in Java in a gentle way.

Kudos to Professor Vivek Sarkar for simplifying complex concepts and presenting them in an elegant manner.



Vivek Sarkar

Department of Computer Science

About Rice University

Rice University is consistently ranked among the top 20 universities in the U.S. and the top 100 in the world. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy....

About the Parallel, Concurrent, and Distributed Programming in Java Specialization

Parallel, concurrent, and distributed programming underlies software in multiple domains, ranging from biomedical research to financial services. This specialization is intended for anyone with a basic knowledge of sequential programming in Java who is motivated to learn how to write parallel, concurrent, and distributed programs. Through a collection of three courses (which may be taken in any order or separately), you will learn foundational topics in Parallelism, Concurrency, and Distribution. These courses will prepare you for multithreaded and distributed programming for a wide range of computer platforms, from mobile devices to cloud computing servers. To see an overview video for this Specialization, click here! For an interview with two early-career software engineers on the relevance of parallel computing to their jobs, click here.

Acknowledgments

The instructor, Prof. Vivek Sarkar, would like to thank Dr. Max Grossman for his contributions to the mini-projects and other course material, Dr. Zoran Budimlic for his contributions to the quizzes, Dr. Max Grossman and Dr. Shams Imam for their contributions to the pedagogic PCDP library used in some of the mini-projects, and all members of the Rice Online team who contributed to the development of the course content (including Martin Calvi, Annette Howe, Seth Tyger, and Chong Zhou).

Frequently Asked Questions

  • Once you enroll for a Certificate, you’ll have access to all videos, quizzes, and programming assignments (if applicable). Peer review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing, you may not be able to access certain assignments.

  • When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.

More questions? Visit the Learner Help Center.