The advent of pre-trained generative models, particularly large language models, has revolutionized the field of NLP and beyond. However, these models tend to reproduce the biases present in their training data. Moreover, they may fail to satisfy distributional properties of the data that are hard to enforce through training alone: for example, a model trained exclusively on compilable code is not thereby guaranteed to generate only compilable code. To address these limitations, researchers have introduced distributional control techniques. These techniques, which are not limited to language, make it possible to control the prevalence (i.e., the expectations) of any features of interest in the model's generated outputs. Despite their potential, the widespread adoption of these techniques has been hindered by the difficulty of adapting the existing code, which is complex and scattered across disconnected codebases. Here, we present disco, a Python toolkit that brings these techniques to the wider public. We release disco as an open-source library.
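As a conceptual illustration of distributional control (a sketch of the underlying idea, not the disco API), the following Python snippet represents the target distribution as an exponential tilting of a base language model, P(x) ∝ a(x) · exp(λ · φ(x)), and estimates the expectation of a feature φ under P via self-normalized importance sampling with the base model as proposal. The model name ("gpt2"), the binary feature, and the tilt strength λ are illustrative assumptions.

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def phi(text: str) -> float:
        """Binary feature of interest: does the sample mention 'amazing'? (illustrative)"""
        return float("amazing" in text)

    lam = 2.0  # tilt strength toward the feature (assumed value)

    # Sample from the base model a(x), which also serves as the proposal.
    inputs = tokenizer("The movie was", return_tensors="pt")
    with torch.no_grad():
        samples = model.generate(
            **inputs,
            do_sample=True,
            max_new_tokens=20,
            num_return_sequences=64,
            pad_token_id=tokenizer.eos_token_id,
        )
    texts = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

    # Importance weights: since the proposal equals the base model,
    # P(x)/a(x) ∝ exp(lam * phi(x)); normalize for a self-normalized estimate.
    weights = torch.tensor([math.exp(lam * phi(t)) for t in texts])
    weights /= weights.sum()

    base_moment = sum(phi(t) for t in texts) / len(texts)
    tilted_moment = float(sum(w * phi(t) for w, t in zip(weights, texts)))
    print(f"E_a[phi] ~ {base_moment:.2f} vs E_P[phi] ~ {tilted_moment:.2f}")

Raising λ increases the expectation of φ under the tilted distribution relative to the base model; in practice, toolkits for distributional control solve for the tilt that matches a user-specified target expectation rather than fixing λ by hand.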