Results for "pyspark queries"
- Status: New, Free Trial
Skills you'll gain: PySpark, Apache Hadoop, Apache Spark, Big Data, Apache Hive, Data Lakes, Data Pipelines, Data Processing, Data Import/Export, Data Integration, Linux Commands, Data Mapping, Linux, File Systems, Text Mining, Data Management, Distributed Computing, Relational Databases, Java, C++ (Programming Language)
- Status: New, Free Trial
Skills you'll gain: Databricks, CI/CD, Apache Spark, Microsoft Azure, Data Governance, Data Lakes, Data Architecture, Integration Testing, Real Time Data, PySpark, Data Pipelines, Data Integration, Data Management, Automation, Data Storage, Development Testing, Data Processing, Jupyter, Data Quality, User Provisioning
- Status: Free Trial
Skills you'll gain: Apache Hadoop, Apache Spark, PySpark, Apache Hive, Big Data, IBM Cloud, Kubernetes, Docker (Software), Scalability, Data Processing, Distributed Computing, Performance Tuning, Data Transformation, Debugging
- Status: Free Trial
Educator: Edureka
Skills you'll gain: PySpark, Data Pipelines, Data Processing, Data Visualization, Natural Language Processing, Pandas (Python Package), Feature Engineering, Machine Learning, Data Transformation, Supervised Learning, Text Mining, Deep Learning, Scalability, Regression Analysis
- Status: Preview
Educator: Edureka
Skills you'll gain: PySpark, Apache Spark, Distributed Computing, Apache Hadoop, Data Processing, Data Manipulation, Data Analysis, Exploratory Data Analysis, Python Programming
- Status: Free Trial
Skills you'll gain: SQL, Databases, Stored Procedure, Relational Databases, Database Design, Query Languages, Database Management, Data Analysis, Jupyter, Data Manipulation, Pandas (Python Package), Python Programming, Transaction Processing
- Status: Free Trial
Skills you'll gain: NoSQL, Apache Hadoop, Apache Spark, MongoDB, PySpark, Apache Hive, Databases, Apache Cassandra, Big Data, Machine Learning, Generative AI, IBM Cloud, Applied Machine Learning, Kubernetes, Supervised Learning, Distributed Computing, Docker (Software), Database Management, Data Pipelines, Scalability
- Status: Free Trial
Educator: Duke University
Skills you'll gain: PySpark, Snowflake Schema, Databricks, Data Pipelines, Apache Spark, MLOps (Machine Learning Operations), Apache Hadoop, Big Data, Data Warehousing, Data Quality, Data Integration, Data Processing, DevOps, Data Transformation, SQL, Python Programming
- Status: New, Free Trial
Skills you'll gain: PySpark, Apache Hadoop, Apache Spark, Big Data, Apache Hive, Data Processing, Data Mapping, Text Mining, Distributed Computing, Debugging, Scripting Languages, Java Programming
- Status: Free Trial
Skills you'll gain: Relational Databases, Database Design, SQL, Database Management, Databases, Query Languages, Data Analysis, Exploratory Data Analysis, Data Science, R Programming, Data Manipulation, Data Modeling, Data Access
- Status: New, Free Trial
Skills you'll gain: Databricks, Apache Spark, Microsoft Azure, PySpark, Data Lakes, Data Processing, Jupyter, File Systems, File Management, Cloud Storage, Cloud Computing Architecture
- Status: New, Preview
Skills you'll gain: SQL, Version Control, Git (Version Control System), MySQL, Collaborative Software, Query Languages, Relational Databases, Data Access, Jupyter
In summary, here are 10 of our most popular "pyspark queries" courses:
- Hadoop and Spark Fundamentals: Pearson
- Mastering Azure Databricks for Data Engineers: Packt
- Introduction to Big Data with Spark and Hadoop: IBM
- PySpark for Data Science: Edureka
- Introduction to PySpark: Edureka
- Databases and SQL for Data Science with Python: IBM
- NoSQL, Big Data, and Spark Foundations: IBM
- Spark, Hadoop, and Snowflake for Data Engineering: Duke University
- Hadoop and Spark Fundamentals: Unit 2: Pearson
- SQL for Data Science with R: IBM
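
As a rough illustration of what "PySpark queries" refers to in these courses, below is a minimal sketch of a DataFrame query. The file name, column names, and aggregation are purely illustrative assumptions, not taken from any specific course.

```python
# Minimal sketch of a typical PySpark DataFrame query.
# Assumes a local Spark installation; sales.csv and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-query-example").getOrCreate()

# Load a CSV file into a DataFrame (path and schema are illustrative).
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Filter, group, aggregate, and sort: the core query pattern covered in most
# introductory PySpark material.
result = (
    df.filter(F.col("amount") > 100)
      .groupBy("region")
      .agg(F.sum("amount").alias("total_amount"))
      .orderBy(F.desc("total_amount"))
)
result.show()

spark.stop()
```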