We propose a Semi-Supervised Learning (SSL) methodology that explicitly encodes several necessary priors for learning efficient representations of nodes in a network. The key component of our framework is a semi-supervised cluster-invariance constraint that groups nodes with similar labels together in the embedding space. We show that explicitly encoding this constraint yields meaningful node representations from both qualitative (visual) and quantitative standpoints. Specifically, our methodology achieves improved node classification accuracy and visibly better clusterability of nodes across a wide range of datasets, outperforming competitive baselines.
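
For intuition, the following is a minimal sketch of one possible way such a cluster-invariance constraint could be realized; it assumes a simple centroid-based penalty over the labeled nodes, and the names `embeddings`, `labels`, `labeled_mask`, and `lam` are illustrative rather than the exact formulation of our framework.

```python
# Illustrative sketch (not the exact formulation used in this work):
# a cluster-invariance penalty that pulls the embeddings of labeled nodes
# toward the centroid of their label's cluster, added to an SSL objective.
import torch

def cluster_invariance_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            labeled_mask: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between each labeled node's embedding and
    the centroid of its label's cluster (hypothetical formulation)."""
    z = embeddings[labeled_mask]          # embeddings of labeled nodes
    y = labels[labeled_mask]              # their labels
    loss = z.new_zeros(())
    for c in y.unique():
        zc = z[y == c]                    # embeddings assigned to label c
        centroid = zc.mean(dim=0, keepdim=True)
        loss = loss + ((zc - centroid) ** 2).sum()
    return loss / z.shape[0]

# Example usage (hypothetical weighting):
# total_loss = supervised_loss + lam * cluster_invariance_loss(Z, y, labeled_mask)
```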