Data Engineering with Hadoop | DBDA.X424
Big Data platforms are distributed systems that can process large amounts of data across clusters of servers. They are used across industries, in internet startups and established enterprises alike. In this comprehensive introductory course, you will get up to speed on current Big Data platforms and gain insight into cloud-based Big Data architectures. We will cover Hadoop and Spark, as well as SQL-based Big Data tools such as Hive.
The first half of the course includes an overview of the MapReduce and Spark frameworks. You will learn how to write MapReduce and Spark jobs and how to optimize data processing applications. The second half of the course covers SQL-based tools for Big Data; we use Hive to build ETL jobs. The course also includes the fundamentals of NoSQL databases such as HBase, along with the Kafka streaming platform.
The course consists of interactive lectures, hands-on labs in class, and take-home practice exercises. Upon completion of this course, you will possess a strong understanding of the tools used to build Big Data applications with MapReduce, Spark, and Hive.
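The MapReduce model covered in the first half of the course can be previewed with a toy, single-process sketch in plain Python. Real jobs run distributed across a Hadoop or Spark cluster; the function names here are invented for illustration, but the map, shuffle, and reduce stages mirror what the framework does.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs from each input record
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework would
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the values for each key into a final result
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big platforms", "data engineering"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts -> {'big': 2, 'data': 2, 'platforms': 1, 'engineering': 1}
```

On a real cluster the shuffle moves data between machines over the network, which is why optimizing MapReduce jobs (a course topic) often means minimizing what the map phase emits.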
At the conclusion of the course, you should be able to:
- Describe the role Hadoop plays in the analysis of big data
- Discuss the inner workings of Hadoop's computing framework, including MapReduce processing and Hadoop's file system (HDFS)
- Develop MapReduce applications - with exposure to both traditional MR2 and Spark
- Use Hive and NoSQL databases for data analysis
- Leverage the Hadoop ecosystem to become productive in analyzing data
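For a flavor of the Hive work listed above, a typical analysis query looks like the following. The snippet runs against SQLite purely so it is self-contained; HiveQL's SELECT/GROUP BY syntax is nearly identical, though Hive adds its own DDL extensions (e.g. PARTITIONED BY) and stores tables on HDFS. The table and column names are made up for illustration.

```python
import sqlite3

# Hypothetical page-view table; in Hive this data would live on HDFS
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user_id TEXT, page TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?, ?)",
    [("u1", "home", 3), ("u2", "home", 1), ("u1", "about", 2)],
)

# A typical aggregation query; the HiveQL version reads the same
rows = conn.execute(
    "SELECT page, SUM(views) FROM page_views GROUP BY page ORDER BY page"
).fetchall()
# rows -> [('about', 2), ('home', 4)]
```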
Topics include:
- Big Data applications architecture
- Understanding Hadoop distributed file system (HDFS)
- How MapReduce framework works
- Introduction to HBase (Hadoop NoSQL database)
- Introduction to Apache Kafka
- Developing MapReduce applications
- Introduction to Spark and SparkSQL
- Developing Spark/SparkSQL applications
- Managing tables and query development in Hive
- Introduction to data pipelines
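The data-pipelines topic can be sketched with chained Python generators playing the role of extract, transform, and load stages. This is a minimal, in-memory illustration only; the stage names, the doubling rule, and the sample records are all invented, and a production pipeline would read from and write to external systems.

```python
def extract(records):
    # Extract: pull raw records from a source (here, an in-memory list)
    for rec in records:
        yield rec

def transform(stream):
    # Transform: drop incomplete records and derive a new value
    for rec in stream:
        if rec.get("value") is not None:
            yield {**rec, "value": rec["value"] * 2}

def load(stream):
    # Load: write results to a sink (here, collect into a list)
    return list(stream)

raw = [{"id": 1, "value": 10}, {"id": 2, "value": None}, {"id": 3, "value": 5}]
result = load(transform(extract(raw)))
# result -> [{'id': 1, 'value': 20}, {'id': 3, 'value': 10}]
```

Because each stage is a generator, records stream through one at a time rather than being materialized in full, which is the same design pressure that shapes real Big Data pipelines.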
Note(s): This course uses the Amazon EMR Hadoop distribution. Students are required to bring laptops—with a 64-bit CPU and a minimum of 8GB of memory—to class.
Skills Needed: Basic SQL skills and the ability to write simple programs in a modern programming language are required. An understanding of databases and of parallel or distributed computing is helpful.