The course begins with measurement and tuning concepts. It reviews how the components of the Linux kernel (scheduler, network and IO stacks) and application APIs (including asynchronous and multi-threaded programming) interact and work together as scalable solutions. You will learn how to identify resource contention issues that result in lower throughput and higher latencies. You'll also learn how to use the Linux resource management framework (cgroups, containers) and server virtualization technologies to improve agility in resource provisioning. Additionally, you'll gain experience simulating production workloads for problem isolation and benchmarking.
You will gain hands-on experience with the rich set of monitoring and tracing tools available in Linux, including pidstat, iotop, fio, and sysbench, as well as advanced full-stack analysis tools such as SystemTap, perf, and sysdig. Students will also be exposed to key cloud technologies such as data sharding, auto-scaling, Service-Oriented Architecture (SOA) and the DevOps model, which allow companies to deploy cloud-native services at a scale not possible in a data center-based environment.
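Tools like pidstat and iotop are front ends to counters the kernel already exports under /proc. A minimal sketch, assuming a Linux system, of reading the raw per-process scheduler counters those tools aggregate:

```shell
#!/bin/sh
pid=$$
# CPU time in clock ticks: fields 14 (utime) and 15 (stime) of /proc/<pid>/stat.
# Whitespace splitting is safe here because the shell's comm name has no spaces.
utime=$(awk '{print $14}' /proc/$pid/stat)
stime=$(awk '{print $15}' /proc/$pid/stat)
# Context-switch counts from /proc/<pid>/status distinguish blocking on a
# resource (voluntary) from being preempted by the scheduler (nonvoluntary).
vol=$(awk '/^voluntary_ctxt_switches/ {print $2}' /proc/$pid/status)
echo "utime_ticks=$utime stime_ticks=$stime voluntary_switches=$vol"
```

A high nonvoluntary-switch count relative to voluntary switches is one quick hint of CPU contention, the kind of signal pidstat surfaces per interval.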
- Linux performance metrics, management and tuning principles
- Linux kernel (scheduler, network and IO stacks)
- Application API (with asynchronous and multi-threaded programming)
- How to use Linux performance monitoring and tracing tools and interpret results
- How to simulate production workload for problem isolation and benchmarking
- Finding performance bottlenecks and application latencies via advanced tool sets
- Industry trends: data sharding and auto-scaling in public and private cloud
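As a taste of the workload-simulation topic above, a CPU-bound load generator can be sketched in a few lines of shell. This is illustrative only; the labs use purpose-built tools such as fio and sysbench, and the iteration count here is arbitrary:

```shell
#!/bin/sh
# Hypothetical micro-benchmark: burn CPU in a pure-shell arithmetic loop
# and report elapsed wall time.
iters=100000
start=$(date +%s)
i=0
while [ "$i" -lt "$iters" ]; do
    i=$((i + 1))
done
end=$(date +%s)
echo "iterations=$i elapsed_s=$((end - start))"
```

Running a generator like this while watching pidstat or perf is the basic pattern for isolating a bottleneck under a controlled, repeatable load.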
NOTE: Students are required to bring their own laptops to do labs in class.