Units
3.0 QUARTER UNITS

Course Description

Formerly: Data Engineering with Hadoop

Big Data platforms are distributed systems that can process large amounts of data across clusters of servers. They are used across industries, from internet startups to established enterprises. In this comprehensive course, you will get up to speed on current Big Data platforms and gain insight into cloud-based Big Data architectures. We will cover Hadoop, Spark, Kafka, and SQL-based Big Data platforms such as Hive.

The first half of the course includes an overview of the MapReduce, Spark, Kafka, and Hive frameworks, as well as relevant aspects of Python programming. You will learn how to write MapReduce and Spark jobs, how to optimize data processing applications, and how to work with SQL-based tools for Big Data; we use Hive to build ETL jobs. The course also covers the fundamentals of NoSQL databases such as HBase and of the Kafka messaging platform.
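To make this concrete, here is a minimal sketch of the kind of Spark job written in this part of the course: a classic MapReduce-style word count in Python (PySpark). The input and output paths are hypothetical placeholders, not course materials.

    # Minimal PySpark word count, illustrating the map/reduce pattern.
    # The S3 paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read text files into an RDD of lines.
    lines = spark.sparkContext.textFile("s3://example-bucket/input/")

    counts = (
        lines.flatMap(lambda line: line.split())   # map: split each line into words
             .map(lambda word: (word, 1))          # map: emit (word, 1) pairs
             .reduceByKey(lambda a, b: a + b)      # reduce: sum the counts per word
    )

    counts.saveAsTextFile("s3://example-bucket/output/word-counts")
    spark.stop()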

The second half of the course covers stream processing and developing streaming applications with Apache Spark. You'll learn how to process large amounts of data with the DataFrame API, Apache Spark's structured data processing model, which provides simple, powerful interfaces. In addition to batch and iterative data processing, Apache Spark supports stream processing, which enables companies to extract useful business insights in near real time.
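As a sketch of what such a streaming application can look like, the following Python (PySpark) Structured Streaming job reads events from a Kafka topic as a streaming DataFrame and maintains running counts per key. The broker address and topic name are hypothetical placeholders.

    # Minimal PySpark Structured Streaming sketch: count events per key from Kafka.
    # The Kafka broker address and topic name are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("StreamingCounts").getOrCreate()

    # Read a stream of records from a Kafka topic into a streaming DataFrame.
    events = (
        spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "localhost:9092")
             .option("subscribe", "clicks")
             .load()
    )

    # Maintain a running count of records per key.
    counts = events.groupBy(col("key")).count()

    # Write the full set of running counts to the console in near real time.
    query = (
        counts.writeStream
              .outputMode("complete")
              .format("console")
              .start()
    )
    query.awaitTermination()

Running aggregations like this one use the "complete" output mode, while simple append-only pipelines typically use "append".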

The course consists of interactive lectures, hands-on labs in class, and take-home practice exercises. Upon completion of this course, you will have a strong understanding of the tools used to build Big Data applications with MapReduce, Spark, and Hive.


Topics

  • Big Data applications architecture
  • Understanding Hadoop distributed file system (HDFS)
  • How the MapReduce framework works
  • Introduction to HBase (Hadoop NoSQL database)
  • Introduction to Apache Kafka
  • Introduction to Spark and SparkSQL
  • Developing Spark/SparkSQL and Hive applications
  • Managing tables and query development in Hive
  • Introduction to data pipelines

Prerequisites / Skills Needed


  • Basic SQL skills and the ability to write simple programs in a modern programming language such as Python are required. An understanding of databases and of parallel or distributed computing is helpful.


Additional Information

This course uses AWS EMR and Databricks for Spark, Hive, and HDFS programming. Students are required to have AWS and Databricks accounts.

AI* - In this course, students build AI agents to design and implement data pipelines as part of an introduction to data pipeline concepts.
