
HPC Performance Optimization: Scaling, Profiling, and Tuning

Mastering HPC performance — Amdahl's Law, Gustafson's Law, strong vs weak scaling, roofline model, communication-computation overlap, load balancing, and profiling with Nsight and VTune.


Why Performance Optimization Matters

HPC clusters are expensive. GPU-hours cost real money — whether you’re paying a cloud provider or amortizing hardware purchases. Every percentage point of efficiency you recover translates directly to more science, more training runs, or fewer dollars spent.

The difference between 60% and 90% parallel efficiency on a 1000-GPU cluster is staggering. At $2/GPU-hour, a 30% efficiency gap on a week-long training run wastes over $100,000. Performance optimization isn’t a nice-to-have — it’s the difference between feasible and infeasible at scale.

This concept covers the theoretical foundations and practical tools you need to understand, measure, and improve parallel performance.

Amdahl’s Law: The Serial Bottleneck

Amdahl’s Law quantifies the maximum speedup achievable by parallelizing a program, given that some fraction of the work is inherently serial.

S(p) = \frac{1}{s + \frac{1 - s}{p}}

Where s is the serial fraction (the portion of work that cannot be parallelized) and p is the number of processors.

The cruel math: even with infinite processors, speedup is bounded by 1/s. If 5% of your code is serial, the maximum speedup is 20x — no matter how many thousands of processors you throw at the problem. If 10% is serial, you cap at 10x.

This has profound implications. It means that optimizing the serial portion of your code is often far more valuable than adding more processors. A developer who reduces the serial fraction from 10% to 5% has doubled the theoretical maximum speedup — something no amount of hardware can achieve otherwise.
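As a quick sanity check, the bound can be computed directly. A minimal sketch (the function name is ours, not from any library):

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's Law: S(p) = 1 / (s + (1 - s) / p)."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / processors)

# With 5% serial work, even 10,000 processors barely approach the 1/s = 20x ceiling:
print(round(amdahl_speedup(0.05, 10_000), 2))   # ~19.96
# Halving the serial fraction doubles the ceiling:
print(round(amdahl_speedup(0.025, 10_000), 2))  # ~39.84
```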

Gustafson’s Law: Scale the Problem

Gustafson’s Law offers a more optimistic perspective by reframing the question. Instead of asking “how fast can we solve a fixed problem?” it asks “how much more work can we do in fixed time?”

S_s(p) = p - s \cdot (p - 1)

The key insight: as you add processors, solve bigger problems, not just the same problem faster. In practice, scientists and engineers almost always scale up their problem size when given more resources — higher resolution simulations, larger training datasets, more parameter sweeps.

Under Gustafson’s model, the serial fraction s is measured relative to the total (scaled) workload. As the parallel portion grows with processor count, the serial overhead becomes a smaller and smaller fraction of the total. Speedup grows nearly linearly with processor count.
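The near-linear growth is easy to verify numerically (an illustrative helper, mirroring the formula above):

```python
def gustafson_speedup(serial_fraction: float, processors: int) -> float:
    """Gustafson's Law: S_s(p) = p - s * (p - 1), with s measured
    against the scaled workload."""
    s = serial_fraction
    return processors - s * (processors - 1)

# With the same 5% serial fraction that capped Amdahl speedup at 20x,
# scaled speedup on 1000 processors is ~950x:
print(gustafson_speedup(0.05, 1000))  # ~950.05
```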

Strong vs Weak Scaling

These two laws correspond to two distinct ways of measuring parallel performance.

Strong Scaling

Fix the problem size, add more processors, and measure how much faster the computation completes. This is the Amdahl’s Law perspective. Strong scaling is what you care about when the problem size is fixed — for example, training a specific model architecture on a specific dataset.

Ideal strong scaling means halving the time when you double the processors. In practice, communication overhead and serial bottlenecks cause the curve to flatten.

Weak Scaling

Fix the problem size per processor, add more processors, and measure whether the total time stays constant. This is the Gustafson’s Law perspective. Weak scaling matters when you want to solve larger problems — for example, increasing batch size proportionally with GPU count.

Ideal weak scaling means constant execution time regardless of processor count. Deviations indicate communication or synchronization overhead growing with scale.

Which Scaling to Report?

Training a fixed model on more GPUs = strong scaling. Training a larger model (or using a proportionally larger batch) on more GPUs = weak scaling. Most DL papers report strong scaling, but weak scaling is often more relevant for production workloads where you scale both compute and data.
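Both scaling modes reduce to simple efficiency ratios once you have measured timings. A sketch with hypothetical numbers (the function names and timings are ours):

```python
def strong_scaling_efficiency(t1: float, tp: float, p: int) -> float:
    """Measured speedup (t1 / tp) relative to ideal p-fold speedup.
    1.0 means perfect strong scaling."""
    return (t1 / tp) / p

def weak_scaling_efficiency(t1: float, tp: float) -> float:
    """Fixed work per processor: ideal weak scaling keeps time constant,
    so efficiency is the ratio of baseline to scaled-run time."""
    return t1 / tp

# Hypothetical: fixed problem runs in 100 s on 1 GPU, 30 s on 4 GPUs.
print(round(strong_scaling_efficiency(100.0, 30.0, 4), 3))  # ~0.833
# Hypothetical: per-GPU problem takes 100 s on 1 GPU, 110 s on 64 GPUs.
print(round(weak_scaling_efficiency(100.0, 110.0), 3))      # ~0.909
```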

The Roofline Model

The roofline model helps you determine whether your kernel is compute-bound or memory-bound — and therefore where to focus optimization effort.

It defines two performance ceilings:

  • Peak compute: the maximum FLOPS the hardware can deliver (e.g., 312 TFLOPS for an A100 in FP16)
  • Peak memory bandwidth: the maximum data throughput (e.g., 2 TB/s for A100 HBM2e)

The key metric is operational intensity — the ratio of compute operations to bytes accessed:

\text{Operational Intensity} = \frac{\text{FLOPs}}{\text{Bytes transferred}}

If your kernel’s operational intensity is low (few operations per byte), you hit the bandwidth ceiling first — the kernel is memory-bound. Optimization should focus on data access patterns, caching, and reducing memory traffic.

If operational intensity is high (many operations per byte), you hit the compute ceiling — the kernel is compute-bound. Optimization should focus on instruction-level parallelism, vectorization, and utilizing tensor cores.

A critical insight for deep learning practitioners: most DL training kernels are memory-bandwidth-bound, not compute-bound. Large matrix multiplications are the exception — the softmax and masking steps of attention, normalization layers, activation functions, and element-wise operations are almost always limited by how fast you can feed data to the compute units.
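The two ceilings combine into a single attainable-performance curve. A minimal sketch using the A100 numbers quoted above as defaults (the function name and the example intensity are ours):

```python
def attainable_tflops(intensity_flops_per_byte: float,
                      peak_tflops: float = 312.0,
                      bandwidth_tbs: float = 2.0) -> float:
    """Roofline: attainable performance is the lower of the compute
    ceiling and bandwidth * operational intensity."""
    return min(peak_tflops, bandwidth_tbs * intensity_flops_per_byte)

# An element-wise FP16 op at ~0.25 FLOPs/byte sits deep in memory-bound territory:
print(attainable_tflops(0.25))   # 0.5 TFLOPS, far below the 312 TFLOPS peak
# The ridge point is 312 / 2 = 156 FLOPs/byte; above it, kernels are compute-bound.
```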

Communication-Computation Overlap

In distributed training, the single biggest performance killer is communication latency. Every time you synchronize gradients across GPUs or nodes, computation stalls while data moves through the network.

The solution is to overlap communication with computation. While one layer’s gradients are being communicated, the next layer’s backward pass is still computing.

The technique works because the backward pass processes layers in reverse order. As soon as the final layer’s gradients are computed, their all-reduce can begin while the second-to-last layer is still computing:

Layer N backward:    [==compute==]
Layer N all-reduce:               [===communicate===]
Layer N-1 backward:               [==compute==]
Layer N-1 all-reduce:                           [===communicate===]
Layer N-2 backward:                             [==compute==]

In practice, this is implemented with non-blocking collectives: MPI_Iallreduce in MPI, or NCCL’s asynchronous operations. PyTorch’s DistributedDataParallel does this automatically by bucketing gradients and launching all-reduce as soon as each bucket is ready.

The ideal scenario is when communication time is fully hidden behind computation. This requires that backward pass compute time per layer exceeds the all-reduce time for that layer’s gradients.
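That condition can be checked with a back-of-the-envelope model: each layer's all-reduce overlaps the backward compute of the layer that follows it, and anything left over is exposed. A sketch with hypothetical per-layer timings (function name and numbers are ours):

```python
def exposed_comm_ms(backward_ms: list, allreduce_ms: list) -> float:
    """Estimate communication time NOT hidden by backward compute.
    Layers are listed in backward-pass order (last layer first): layer i's
    all-reduce runs concurrently with layer i+1's backward computation."""
    exposed = 0.0
    for i, comm in enumerate(allreduce_ms):
        overlap_budget = backward_ms[i + 1] if i + 1 < len(backward_ms) else 0.0
        exposed += max(0.0, comm - overlap_budget)
    return exposed

# 3 ms of all-reduce per layer hides behind 5 ms of compute per layer,
# except for the final all-reduce, which has no compute left to hide behind:
print(exposed_comm_ms([5.0, 5.0, 5.0], [3.0, 3.0, 3.0]))  # 3.0 ms exposed
```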

Load Balancing

Even with perfect scaling laws and overlapped communication, performance suffers if work is distributed unevenly across processors.

Static Partitioning

Divide work evenly upfront. Works well when the cost of each unit of work is predictable and uniform. Domain decomposition in CFD simulations or evenly-sized mini-batches in data-parallel training are classic examples.
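When the units are uniform, the partitioning itself is trivial — the only subtlety is distributing the remainder. A minimal sketch (the helper is ours):

```python
def static_partition(n_items: int, n_workers: int) -> list:
    """Split n_items as evenly as possible: the first (n_items % n_workers)
    workers get one extra item."""
    base, extra = divmod(n_items, n_workers)
    return [base + (1 if w < extra else 0) for w in range(n_workers)]

print(static_partition(10, 4))  # [3, 3, 2, 2]
```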

Dynamic Scheduling

When work costs vary, static partitioning leads to stragglers — fast workers idle while slow ones finish. Dynamic approaches include work stealing (idle processors take tasks from busy ones) and centralized task queues.
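A centralized task queue is the simplest of these: workers pull the next task whenever they finish one, so no worker sits idle while tasks remain. A toy simulation with threads standing in for processors (all names and timings are illustrative):

```python
import queue
import threading
import time

def worker(tasks: queue.Queue, results: list, lock: threading.Lock) -> None:
    """Pull tasks until the queue is empty; fast workers naturally take more."""
    while True:
        try:
            cost = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(cost)          # simulate variable-cost work
        with lock:
            results.append(cost)

tasks = queue.Queue()
for cost in [0.01, 0.05, 0.01, 0.03]:   # uneven task costs
    tasks.put(cost)

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # all four tasks completed, regardless of which worker ran them
```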

DL-Specific Challenges

Deep learning introduces unique load balancing problems:

  • Variable-length sequences: In NLP, different sequences in a batch have different lengths. Without padding or bucketing, some GPUs process more tokens than others.
  • Heterogeneous GPUs: Mixed clusters (e.g., A100 + V100) mean different processors finish at different rates. The fast GPUs wait for the slow ones at every synchronization point.
  • Pipeline parallelism bubbles: The startup and teardown phases of pipeline parallelism leave some stages idle, reducing overall utilization.
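For the variable-length case, a common mitigation is length bucketing: sort sequences by length before batching so each batch pads to a similar length and per-GPU token counts stay balanced. A minimal sketch (the helper is ours, not a library API):

```python
def bucket_by_length(sequences: list, batch_size: int) -> list:
    """Group sequences of similar length into batches to minimize padding.
    `sequences` is a list of token lists."""
    ordered = sorted(sequences, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

batches = bucket_by_length([[1] * 3, [1] * 10, [1] * 4, [1] * 9], batch_size=2)
print([[len(s) for s in b] for b in batches])  # [[3, 4], [9, 10]]
```

Without bucketing, a batch mixing a 3-token and a 10-token sequence pads everything to length 10; with it, the short sequences travel together.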

Profiling Tools

Optimization without measurement is guesswork. These tools turn guesswork into engineering.

Nsight Systems

NVIDIA’s system-wide profiler. Shows a timeline view of GPU utilization, kernel launches, memory transfers, CPU activity, and communication operations. This is your first stop — it reveals the big picture: where time is spent, where gaps exist, and how GPU and CPU activities interleave.

nsys profile --trace=cuda,nvtx,osrt python train.py

Nsight Compute

NVIDIA’s per-kernel profiler. Once Nsight Systems identifies a bottleneck kernel, Nsight Compute digs into its details: occupancy, memory throughput, instruction mix, warp stall reasons, and register usage. It tells you why a kernel is slow.

ncu --target-processes all python train.py

VTune

Intel’s CPU profiler. Essential for CPU-side bottlenecks: threading analysis, memory access patterns, cache behavior, and vectorization efficiency. Particularly useful for data loading pipelines and preprocessing.

perf

Linux kernel profiler. Lightweight and always available. Captures cache misses, branch mispredictions, context switches, and other hardware performance counters. Useful for quick sanity checks.

perf stat -e cache-misses,cache-references,instructions python preprocess.py

Profile First, Optimize Second

The bottleneck is rarely where you think it is. Developers routinely spend days optimizing a kernel that accounts for 2% of total runtime while ignoring a data loading pipeline that accounts for 40%. Always profile before optimizing.

Common Pitfalls

1. Optimizing compute when you’re memory-bound

The roofline model exists for a reason. If your kernel is bottlenecked by memory bandwidth, no amount of instruction-level optimization will help. Check your operational intensity first.

2. Ignoring communication overhead in scaling estimates

Amdahl’s Law doesn’t account for the communication overhead that grows with processor count. Real scaling is often worse than Amdahl predicts because all-reduce time increases with the number of participants.

3. Not profiling before optimizing

Intuition about performance is unreliable. A five-minute profiling session with Nsight Systems can save days of misguided optimization effort.

4. Assuming linear scaling without measuring

“We added 4x more GPUs so it should be 4x faster” is a common and costly assumption. Always measure actual scaling and compare against theoretical predictions.

Key Takeaways

  1. Amdahl’s Law sets hard limits — the serial fraction determines maximum speedup, regardless of processor count.

  2. Gustafson’s Law offers hope — scale the problem with the processors and the serial fraction shrinks relative to total work.

  3. Know your bottleneck — the roofline model separates compute-bound kernels from memory-bound ones, guiding where to focus effort.

  4. Overlap communication — hide latency by computing gradients for one layer while communicating another’s.

  5. Profile first — measure before optimizing. The bottleneck is rarely obvious and intuition is frequently wrong.
