Contrastive learning is an effective way of learning visual representations in a self-supervised manner. Pulling the embeddings of two transformed versions of the same image (the positive pair) close to each other while pushing them away from the embeddings of all other images (the negatives) with a contrastive loss leads to powerful and transferable representations. We demonstrate that harder negatives are needed to facilitate better and faster learning in contrastive self-supervised learning, and we propose ways of synthesizing harder negative features on-the-fly and with minimal computational overhead.
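To make the setup concrete, the sketch below shows an InfoNCE-style contrastive loss in which additional hard negatives are synthesized on-the-fly by mixing the hardest existing negative features for each query. This is only an illustrative sketch under our own assumptions, not the exact method described here: the function name, the mixing-by-convex-combination scheme, and all hyperparameters (temperature, number of hard/synthetic negatives) are hypothetical choices made for exposition.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_synthetic_negatives(q, k_pos, negatives,
                                               temperature=0.1,
                                               n_hard=32,
                                               n_synthetic=16):
    """InfoNCE-style loss with extra negatives synthesized on-the-fly.

    q:         (B, D) query embeddings (one view of each image)
    k_pos:     (B, D) key embeddings (the other view; the positive pair)
    negatives: (N, D) embeddings of other images (e.g. a memory queue)
    """
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    negatives = F.normalize(negatives, dim=1)

    # Similarity of each query to every negative; higher = harder negative.
    neg_logits = q @ negatives.t()                                   # (B, N)

    # Take the hardest negatives per query and mix random pairs of them
    # to synthesize new, harder negative features (illustrative scheme).
    k = min(n_hard, negatives.size(0))
    hard_idx = neg_logits.topk(k=k, dim=1).indices                   # (B, k)
    i = hard_idx[:, torch.randint(k, (n_synthetic,))]                # (B, S)
    j = hard_idx[:, torch.randint(k, (n_synthetic,))]                # (B, S)
    alpha = torch.rand(q.size(0), n_synthetic, 1, device=q.device)
    mixed = alpha * negatives[i] + (1 - alpha) * negatives[j]        # (B, S, D)
    mixed = F.normalize(mixed, dim=2)

    syn_logits = torch.einsum('bd,bsd->bs', q, mixed)                # (B, S)
    pos_logits = (q * k_pos).sum(dim=1, keepdim=True)                # (B, 1)

    # Standard InfoNCE: the positive sits at index 0 of each row of logits.
    logits = torch.cat([pos_logits, neg_logits, syn_logits], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

The synthesized features are cheap to compute (a few vector mixes per query), which is why this kind of approach can add harder negatives with minimal computational overhead compared to enlarging the batch or the memory queue.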