Today, I explored hierarchical clustering, a method that organizes data into a hierarchy of nested groups, visually represented as a dendrogram or tree. This technique can be implemented through two different approaches:
Firstly, Agglomerative Hierarchical Clustering begins by treating each data point as its own cluster, so with N data points there are initially N clusters. The algorithm then repeatedly merges the closest pair of clusters until all points are united into a single cluster. This is a bottom-up strategy, illustrated in the sketch below.
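To see the bottom-up merging and the resulting dendrogram in practice, here is a minimal sketch using SciPy's scipy.cluster.hierarchy module. The 2-D points in X are toy values I made up for illustration, and Ward linkage is just one common choice for deciding which pair of clusters counts as "closest" at each step.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy 2-D dataset: every point starts as its own cluster (6 points -> 6 clusters).
X = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0],
              [5.2, 4.8], [9.0, 9.0], [8.8, 9.2]])

# Ward linkage merges, at each step, the pair of clusters whose union
# increases the within-cluster variance the least (bottom-up merging).
Z = linkage(X, method="ward")

# The dendrogram records the order and the distance of every merge.
dendrogram(Z)
plt.xlabel("data point index")
plt.ylabel("merge distance")
plt.show()
```

Cutting the dendrogram at a chosen height then yields a flat clustering with as many groups as branches crossed at that level.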
Secondly, Divisive Hierarchical Clustering is the opposite of the agglomerative approach: it starts with all data points in a single cluster and progressively splits it. The splitting continues until each data point becomes its own cluster, making this a top-down strategy.
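As a rough illustration of the top-down idea, here is a hypothetical sketch that repeatedly bisects a cluster with 2-means until every cluster contains a single point. The divisive_split function, the split-by-KMeans rule, and the toy data are my own assumptions for this example, not a library API; classical divisive algorithms such as DIANA choose their splits differently.

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive_split(points, depth=0):
    # Stop splitting once the cluster has been reduced to a single point.
    if len(points) < 2:
        print("  " * depth, points.tolist())
        return
    # Hypothetical split rule: bisect the current cluster with 2-means.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    print("  " * depth, f"splitting {len(points)} points into two clusters")
    for label in (0, 1):
        divisive_split(points[labels == label], depth + 1)

# Toy 2-D data: the whole set starts as one cluster and is split top-down.
X = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0],
              [5.2, 4.8], [9.0, 9.0], [8.8, 9.2]])
divisive_split(X)
```

The printed indentation traces the hierarchy: each level of nesting corresponds to one split of a parent cluster into two children.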