We consider the problem of Active Search, where as many relevant objects as possible – ideally all of them – should be retrieved with minimum effort or in minimum time. Two main challenges typically arise when tackling this problem: first, the class of relevant objects often has low prevalence and, secondly, this class can be multi-faceted or multi-modal: objects may be relevant for completely different reasons. To address this problem and its associated issues, we propose an approach based on a non-stationary (aka restless) extension of Thompson Sampling, a well-known strategy for Multi-Armed Bandit problems. The collection is first soft-clustered into a finite set of components, and a posterior distribution over the probability of finding a relevant object inside each cluster is updated after receiving the user's feedback on the proposed instances. The "next instance" selection strategy is a mixed, two-level decision process in which both the soft clusters and their instances are considered. This method can be viewed as an insurance policy, where the cost of the insurance is an extra exploration effort in the short run, in exchange for achieving nearly "total" recall with less effort in the long run.
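The cluster-level part of the selection loop can be illustrated by a minimal sketch. This is a hypothetical simplification, not the paper's exact method: it assumes a Beta posterior on each cluster's relevance rate, a stationary reward model (the restless extension and the instance-level ranking within the chosen cluster are omitted), and all names (`ClusterTS`, `select_cluster`, `update`) are invented for illustration.

```python
import random


class ClusterTS:
    """Sketch of cluster-level Thompson Sampling for Active Search.

    Each cluster carries a Beta(alpha, beta) posterior over the
    probability that an instance drawn from it is relevant.
    """

    def __init__(self, n_clusters):
        # Uninformative Beta(1, 1) prior for every cluster.
        self.alpha = [1.0] * n_clusters
        self.beta = [1.0] * n_clusters

    def select_cluster(self):
        # Thompson Sampling: draw one sample per posterior,
        # then pick the cluster with the highest sampled rate.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, cluster, relevant):
        # User feedback on the proposed instance updates the
        # posterior of the cluster it was drawn from.
        if relevant:
            self.alpha[cluster] += 1.0
        else:
            self.beta[cluster] += 1.0
```

Because a fresh posterior sample is drawn at every step, low-prevalence or multi-modal clusters retain a nonzero chance of being explored (the short-run "insurance cost"), while clusters that keep yielding relevant objects are exploited increasingly often.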