Differential privacy has emerged as one of the de facto standards for measuring privacy risk when performing computations on sensitive data and disseminating the results. Many machine learning algorithms can be made differentially private through the judicious introduction of randomization, usually in the form of noise, within the computation. This randomization comes at a cost in performance, or utility. Managing the privacy-utility tradeoff becomes easier with more data, but in many applications we instead face many data holders, each with a smaller data set. Differential privacy can then serve as a way to share private access to decentralized data, allowing researchers to perform studies with a much larger sample size. In this talk I will describe this setting, algorithms for differentially private decentralized learning, and potential applications to collaborative research in neuroimaging.
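As a rough illustration of the noise-addition idea mentioned above (not drawn from the talk itself), the sketch below applies the standard Laplace mechanism to a bounded mean query; the data range, epsilon value, and function name are illustrative assumptions. It also hints at why more data eases the privacy-utility tradeoff: the noise scale shrinks as the sample size grows.

```python
# Minimal sketch of the Laplace mechanism for an epsilon-DP mean query.
# All names and parameter choices here are illustrative assumptions.
import numpy as np

def private_mean(data, lower, upper, epsilon, rng=None):
    """Epsilon-differentially private estimate of the mean of `data`.

    Values are clipped to [lower, upper]; the sensitivity of the mean is then
    (upper - lower) / n, so Laplace noise with scale sensitivity / epsilon
    suffices for epsilon-DP.
    """
    rng = rng or np.random.default_rng()
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    n = data.size
    sensitivity = (upper - lower) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise

# More data means a smaller noise scale at the same epsilon,
# i.e., better utility for the same privacy guarantee.
small_n = private_mean(np.random.uniform(0, 1, size=100), 0.0, 1.0, epsilon=0.5)
large_n = private_mean(np.random.uniform(0, 1, size=100_000), 0.0, 1.0, epsilon=0.5)
```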