
CPU vs GPU

CPU: a few powerful cores optimized for low latency, branching, and general-purpose tasks. Great for data orchestration, preprocessing, and control flow.

Use cases in ML:

feature engineering, I/O, tokenization, small classical ML, control logic.

GPU: thousands of simpler cores optimized for massive parallel math, especially dense linear algebra. Great for matrix multiplies, convolutions, attention.

Orders-of-magnitude speedups for medium to large models and batches.

Use cases in ML:

deep learning training, embedding inference, vector search re-ranking, image and generative workloads.
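
A minimal sketch of that gap, assuming PyTorch is installed (a CUDA GPU may or may not be present): time the same dense matrix multiply on the CPU and, if available, the GPU. The size `n = 4096` is an arbitrary choice for illustration.

```python
# Time a large matrix multiply on CPU vs GPU (sketch, assumes PyTorch).
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure setup work has finished
    start = time.perf_counter()
    c = a @ b                      # dense linear algebra: where the GPU shines
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```

On typical hardware the GPU timing is dramatically lower, which is the speedup the notes above refer to.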

CUDA

The GPU is the hardware. CUDA (Compute Unified Device Architecture) is the framework, language, and toolkit from NVIDIA that unlocks that hardware.

When working with a GPU, it's a must to check whether CUDA is available (see the sketch below).

There are plenty of GPUs, such as Apple Silicon M-series chips and game-console GPUs, that don't support CUDA.
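
A minimal sketch of that check, assuming PyTorch: use CUDA when it's available, fall back to Apple's MPS backend on M-series Macs, and otherwise run on the CPU.

```python
# Pick the best available device (sketch, assumes PyTorch).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")   # NVIDIA GPU with CUDA
    print("CUDA device:", torch.cuda.get_device_name(0))
elif torch.backends.mps.is_available():
    device = torch.device("mps")    # Apple Silicon GPU (no CUDA)
else:
    device = torch.device("cpu")

print("Using device:", device)
```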

Remember to change the runtime to a GPU in Colab:

https://colab.research.google.com/drive/1byrDchiV4OWdLKOPl8H4UAcdbwFoR7aA?usp=sharing

#cpu #gpu