Consider an agency holding a large database of sensitive personal information: medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy and business decisions) while protecting the privacy of individuals' records. This problem is known variously as "statistical disclosure control", "privacy-preserving data mining", or "private data analysis". We will begin by discussing what makes this problem difficult, and exhibit some of the pitfalls that plague simple attempts at anonymization. Motivated by these failures, we will discuss differential privacy, a rigorous definition of privacy in statistical databases that has received significant recent attention. We will survey some basic techniques for designing differentially private algorithms and conclude by laying out some major challenges facing researchers in this area.
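As a preview of one such basic technique, the sketch below illustrates the Laplace mechanism: a counting query has sensitivity 1 (adding or removing one individual's record changes the count by at most 1), so adding Laplace noise with scale 1/epsilon to the true count yields epsilon-differential privacy. The function names and the toy dataset here are illustrative, not drawn from the text.

```python
import math
import random

def laplace_noise(scale):
    # Draw a sample from the Laplace(0, scale) distribution
    # via inverse-CDF sampling of a uniform variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Release a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices (the Laplace mechanism).
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: each record is one person's age.
ages = [34, 51, 29, 62, 45, 38, 70, 55]
# How many people are 50 or older? True answer: 4; the release is noisy.
noisy_answer = private_count(ages, lambda a: a >= 50, epsilon=0.5)
```

Smaller values of epsilon give stronger privacy but noisier answers; the noise scale grows as 1/epsilon, which is the basic accuracy/privacy trade-off the techniques surveyed here manage.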