k-Nearest Neighbours algorithm

Looking for an algorithm to help you solve classification and regression problems? Look no further than k-nearest neighbours (KNN), a simple supervised machine learning algorithm.
This tutorial will first cover the basics of:
- Supervised machine learning (vs unsupervised machine learning)
- Classification problems (output is a discrete value)
- Regression problems (output is a real number)
k-Nearest Neighbours
How does KNN work? KNN calculates the distance between a query and every example in the data, then selects the specified number of examples (k) closest to the query. In simpler words: KNN assumes that similar things exist in close proximity. Plotting the examples on a graph makes these distances easy to visualise.
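As a minimal sketch of that distance-and-select step (assuming each training example is stored as a `(point, label)` pair of numeric features, which is an illustrative choice rather than the tutorial's own code):

```python
from math import dist  # Euclidean distance, available since Python 3.8

def k_nearest(query, examples, k):
    """Return the k examples closest to `query`.

    `examples` is assumed to be a list of (point, label) pairs,
    where each point is a tuple of numeric features.
    """
    # Sort every example by its Euclidean distance to the query,
    # then keep only the k closest
    return sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
```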
Depending on whether you are using KNN for classification or regression, the algorithm will choose the most frequent label among those neighbours (classification) or the average of their labels (regression).
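Putting the two steps together, a self-contained sketch of a full prediction might look like this (again an illustration, not the tutorial's own implementation):

```python
from math import dist
from statistics import mean, mode

def knn_predict(query, examples, k, regression=False):
    """Predict a label for `query` from its k nearest neighbours.

    `examples` is a list of (point, label) pairs, as in the sketch above.
    """
    neighbours = sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
    labels = [label for _, label in neighbours]
    # Regression: average the neighbours' labels;
    # classification: pick the most frequent label
    return mean(labels) if regression else mode(labels)

# Example: two "A" points sit near the query, so it is classified as "A"
points = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((6.0, 6.0), "B")]
print(knn_predict((1.1, 1.0), points, k=2))  # -> "A"
```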
The second section of the tutorial:
- walks you step by step through implementing the KNN algorithm from scratch
- helps you choose the right value for ‘k’ (see the sketch after this list)
- examines the advantages and disadvantages of KNN (for example, while the algorithm is easy to implement, it gets slower as the number of examples grows)
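One common way to choose ‘k’ (not necessarily the method this tutorial uses) is to cross-validate several candidate values and keep the one that scores best, for example with scikit-learn on a built-in dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score a range of candidate k values with 5-fold cross-validation;
# odd values help avoid ties in binary classification
for k in range(1, 16, 2):
    model = KNeighborsClassifier(n_neighbors=k)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"k={k:2d}  mean accuracy={score:.3f}")
```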
Real-life example
To help you understand the real-world applications of KNN, the tutorial then provides a concrete example of the algorithm in practice, examining how it can be used in recommender systems on platforms like Amazon, Medium, Netflix, and YouTube.
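As a toy sketch of the idea behind item-based recommendation with KNN (the rating matrix and setup here are invented for illustration, not taken from any platform):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy user-item rating matrix (rows: users, columns: items)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
])

# Each column is an item described by the ratings it received;
# items with similar rating patterns are "close" under KNN
model = NearestNeighbors(n_neighbors=2, metric="cosine")
model.fit(ratings.T)

# Find the item most similar to item 0
# (the first neighbour returned is item 0 itself)
distances, indices = model.kneighbors(ratings.T[[0]])
print("Items most similar to item 0:", indices[0])
```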